00:00:00.001 Started by upstream project "autotest-per-patch" build number 132772 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.050 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.051 The recommended git tool is: git 00:00:00.051 using credential 00000000-0000-0000-0000-000000000002 00:00:00.053 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.092 Fetching changes from the remote Git repository 00:00:00.096 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.153 Using shallow fetch with depth 1 00:00:00.153 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.153 > git --version # timeout=10 00:00:00.228 > git --version # 'git version 2.39.2' 00:00:00.229 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.268 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.268 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.190 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.202 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.215 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.215 > git config core.sparsecheckout # timeout=10 00:00:05.227 > git read-tree -mu HEAD # timeout=10 00:00:05.243 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.269 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.269 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.387 [Pipeline] Start of Pipeline 00:00:05.399 [Pipeline] library 00:00:05.400 Loading library shm_lib@master 00:00:05.400 Library shm_lib@master is cached. Copying from home. 00:00:05.418 [Pipeline] node 00:00:05.458 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.460 [Pipeline] { 00:00:05.471 [Pipeline] catchError 00:00:05.472 [Pipeline] { 00:00:05.486 [Pipeline] wrap 00:00:05.493 [Pipeline] { 00:00:05.502 [Pipeline] stage 00:00:05.506 [Pipeline] { (Prologue) 00:00:05.772 [Pipeline] sh 00:00:06.053 + logger -p user.info -t JENKINS-CI 00:00:06.070 [Pipeline] echo 00:00:06.071 Node: WFP8 00:00:06.077 [Pipeline] sh 00:00:06.371 [Pipeline] setCustomBuildProperty 00:00:06.381 [Pipeline] echo 00:00:06.383 Cleanup processes 00:00:06.390 [Pipeline] sh 00:00:06.678 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.678 3304554 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.689 [Pipeline] sh 00:00:06.970 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.970 ++ grep -v 'sudo pgrep' 00:00:06.970 ++ awk '{print $1}' 00:00:06.970 + sudo kill -9 00:00:06.970 + true 00:00:06.984 [Pipeline] cleanWs 00:00:06.993 [WS-CLEANUP] Deleting project workspace... 00:00:06.993 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.999 [WS-CLEANUP] done 00:00:07.001 [Pipeline] setCustomBuildProperty 00:00:07.013 [Pipeline] sh 00:00:07.294 + sudo git config --global --replace-all safe.directory '*' 00:00:07.381 [Pipeline] httpRequest 00:00:08.063 [Pipeline] echo 00:00:08.064 Sorcerer 10.211.164.101 is alive 00:00:08.072 [Pipeline] retry 00:00:08.074 [Pipeline] { 00:00:08.083 [Pipeline] httpRequest 00:00:08.087 HttpMethod: GET 00:00:08.087 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.088 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.098 Response Code: HTTP/1.1 200 OK 00:00:08.098 Success: Status code 200 is in the accepted range: 200,404 00:00:08.099 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.136 [Pipeline] } 00:00:10.149 [Pipeline] // retry 00:00:10.155 [Pipeline] sh 00:00:10.441 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.455 [Pipeline] httpRequest 00:00:10.787 [Pipeline] echo 00:00:10.789 Sorcerer 10.211.164.101 is alive 00:00:10.799 [Pipeline] retry 00:00:10.801 [Pipeline] { 00:00:10.817 [Pipeline] httpRequest 00:00:10.823 HttpMethod: GET 00:00:10.823 URL: http://10.211.164.101/packages/spdk_421ce385490f8ab551e525b6e5086b5608a87772.tar.gz 00:00:10.824 Sending request to url: http://10.211.164.101/packages/spdk_421ce385490f8ab551e525b6e5086b5608a87772.tar.gz 00:00:10.826 Response Code: HTTP/1.1 200 OK 00:00:10.826 Success: Status code 200 is in the accepted range: 200,404 00:00:10.826 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_421ce385490f8ab551e525b6e5086b5608a87772.tar.gz 00:00:31.197 [Pipeline] } 00:00:31.215 [Pipeline] // retry 00:00:31.223 [Pipeline] sh 00:00:31.511 + tar --no-same-owner -xf spdk_421ce385490f8ab551e525b6e5086b5608a87772.tar.gz 00:00:34.064 [Pipeline] sh 00:00:34.350 + git -C spdk log --oneline -n5 00:00:34.350 421ce3854 env: add mem_map_fini and vtophys_fini to cleanup mem maps 00:00:34.350 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:00:34.350 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:00:34.350 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev 00:00:34.350 2e10c84c8 nvmf: Expose DIF type of namespace to host again 00:00:34.361 [Pipeline] } 00:00:34.376 [Pipeline] // stage 00:00:34.384 [Pipeline] stage 00:00:34.387 [Pipeline] { (Prepare) 00:00:34.404 [Pipeline] writeFile 00:00:34.419 [Pipeline] sh 00:00:34.703 + logger -p user.info -t JENKINS-CI 00:00:34.717 [Pipeline] sh 00:00:35.007 + logger -p user.info -t JENKINS-CI 00:00:35.019 [Pipeline] sh 00:00:35.304 + cat autorun-spdk.conf 00:00:35.304 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:35.304 SPDK_TEST_NVMF=1 00:00:35.304 SPDK_TEST_NVME_CLI=1 00:00:35.304 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:35.304 SPDK_TEST_NVMF_NICS=e810 00:00:35.304 SPDK_TEST_VFIOUSER=1 00:00:35.304 SPDK_RUN_UBSAN=1 00:00:35.304 NET_TYPE=phy 00:00:35.312 RUN_NIGHTLY=0 00:00:35.317 [Pipeline] readFile 00:00:35.348 [Pipeline] withEnv 00:00:35.350 [Pipeline] { 00:00:35.365 [Pipeline] sh 00:00:35.653 + set -ex 00:00:35.653 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:35.653 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:35.653 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:35.653 ++ SPDK_TEST_NVMF=1 00:00:35.653 ++ 
SPDK_TEST_NVME_CLI=1 00:00:35.653 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:35.653 ++ SPDK_TEST_NVMF_NICS=e810 00:00:35.653 ++ SPDK_TEST_VFIOUSER=1 00:00:35.653 ++ SPDK_RUN_UBSAN=1 00:00:35.653 ++ NET_TYPE=phy 00:00:35.653 ++ RUN_NIGHTLY=0 00:00:35.653 + case $SPDK_TEST_NVMF_NICS in 00:00:35.653 + DRIVERS=ice 00:00:35.653 + [[ tcp == \r\d\m\a ]] 00:00:35.653 + [[ -n ice ]] 00:00:35.653 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:35.653 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:35.653 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:35.653 rmmod: ERROR: Module irdma is not currently loaded 00:00:35.653 rmmod: ERROR: Module i40iw is not currently loaded 00:00:35.653 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:35.653 + true 00:00:35.653 + for D in $DRIVERS 00:00:35.653 + sudo modprobe ice 00:00:35.653 + exit 0 00:00:35.663 [Pipeline] } 00:00:35.679 [Pipeline] // withEnv 00:00:35.684 [Pipeline] } 00:00:35.716 [Pipeline] // stage 00:00:35.749 [Pipeline] catchError 00:00:35.750 [Pipeline] { 00:00:35.758 [Pipeline] timeout 00:00:35.759 Timeout set to expire in 1 hr 0 min 00:00:35.760 [Pipeline] { 00:00:35.769 [Pipeline] stage 00:00:35.771 [Pipeline] { (Tests) 00:00:35.781 [Pipeline] sh 00:00:36.061 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:36.061 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:36.061 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:36.061 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:36.061 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:36.061 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:36.061 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:36.061 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:36.061 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:36.061 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:36.061 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:36.061 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:36.061 + source /etc/os-release 00:00:36.061 ++ NAME='Fedora Linux' 00:00:36.061 ++ VERSION='39 (Cloud Edition)' 00:00:36.061 ++ ID=fedora 00:00:36.061 ++ VERSION_ID=39 00:00:36.061 ++ VERSION_CODENAME= 00:00:36.061 ++ PLATFORM_ID=platform:f39 00:00:36.061 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:00:36.061 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:36.061 ++ LOGO=fedora-logo-icon 00:00:36.061 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:00:36.061 ++ HOME_URL=https://fedoraproject.org/ 00:00:36.061 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:00:36.061 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:36.061 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:36.061 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:36.061 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:00:36.061 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:36.061 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:00:36.061 ++ SUPPORT_END=2024-11-12 00:00:36.061 ++ VARIANT='Cloud Edition' 00:00:36.061 ++ VARIANT_ID=cloud 00:00:36.061 + uname -a 00:00:36.061 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:00:36.061 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:37.970 Hugepages 00:00:37.970 node hugesize free / total 00:00:37.970 node0 1048576kB 0 / 0 00:00:37.970 node0 2048kB 0 / 0 00:00:37.970 node1 1048576kB 0 / 0 00:00:37.970 node1 2048kB 0 / 0 00:00:37.970 00:00:37.970 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:37.970 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:37.970 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:37.970 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:37.970 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:37.970 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:37.970 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:38.229 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:38.229 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:38.229 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:00:38.229 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:38.229 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:38.229 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:38.229 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:38.229 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:38.229 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:38.229 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:38.229 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:38.229 + rm -f /tmp/spdk-ld-path 00:00:38.229 + source autorun-spdk.conf 00:00:38.229 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.229 ++ SPDK_TEST_NVMF=1 00:00:38.229 ++ SPDK_TEST_NVME_CLI=1 00:00:38.229 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:38.229 ++ SPDK_TEST_NVMF_NICS=e810 00:00:38.229 ++ SPDK_TEST_VFIOUSER=1 00:00:38.229 ++ SPDK_RUN_UBSAN=1 00:00:38.229 ++ NET_TYPE=phy 00:00:38.229 ++ RUN_NIGHTLY=0 00:00:38.229 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:38.229 + [[ -n '' ]] 00:00:38.229 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:38.229 + for M in /var/spdk/build-*-manifest.txt 00:00:38.229 + [[ -f 
/var/spdk/build-kernel-manifest.txt ]] 00:00:38.229 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:38.229 + for M in /var/spdk/build-*-manifest.txt 00:00:38.229 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:38.229 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:38.229 + for M in /var/spdk/build-*-manifest.txt 00:00:38.229 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:38.229 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:38.229 ++ uname 00:00:38.229 + [[ Linux == \L\i\n\u\x ]] 00:00:38.229 + sudo dmesg -T 00:00:38.229 + sudo dmesg --clear 00:00:38.229 + dmesg_pid=3305475 00:00:38.229 + [[ Fedora Linux == FreeBSD ]] 00:00:38.229 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:38.229 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:38.229 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:38.229 + [[ -x /usr/src/fio-static/fio ]] 00:00:38.229 + export FIO_BIN=/usr/src/fio-static/fio 00:00:38.229 + FIO_BIN=/usr/src/fio-static/fio 00:00:38.229 + sudo dmesg -Tw 00:00:38.229 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:38.229 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:38.229 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:38.229 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:38.229 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:38.229 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:38.229 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:38.229 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:38.229 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:38.489 04:55:14 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:00:38.489 04:55:14 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:38.489 04:55:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.490 04:55:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:00:38.490 04:55:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:00:38.490 04:55:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:38.490 04:55:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:00:38.490 04:55:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:00:38.490 04:55:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:00:38.490 04:55:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:00:38.490 04:55:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:00:38.490 04:55:14 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:00:38.490 04:55:14 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:38.490 04:55:14 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:00:38.490 04:55:14 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:38.490 04:55:14 -- scripts/common.sh@15 -- $ shopt -s extglob 00:00:38.490 04:55:14 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:38.490 04:55:14 -- scripts/common.sh@552 -- $ 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:38.490 04:55:14 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:38.490 04:55:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:38.490 04:55:14 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:38.490 04:55:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:38.490 04:55:14 -- paths/export.sh@5 -- $ export PATH 00:00:38.490 04:55:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:38.490 04:55:14 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:38.490 04:55:14 -- common/autobuild_common.sh@493 -- $ date +%s 00:00:38.490 04:55:14 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733716514.XXXXXX 00:00:38.490 04:55:14 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733716514.KZiDex 00:00:38.490 04:55:14 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:00:38.490 04:55:14 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:00:38.490 04:55:14 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:38.490 04:55:14 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:38.490 04:55:14 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:38.490 04:55:14 -- common/autobuild_common.sh@509 -- $ get_config_params 00:00:38.490 04:55:14 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:00:38.490 04:55:14 -- common/autotest_common.sh@10 -- $ set +x 
00:00:38.490 04:55:15 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:38.490 04:55:15 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:00:38.490 04:55:15 -- pm/common@17 -- $ local monitor 00:00:38.490 04:55:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:38.490 04:55:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:38.490 04:55:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:38.490 04:55:15 -- pm/common@21 -- $ date +%s 00:00:38.490 04:55:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:38.490 04:55:15 -- pm/common@21 -- $ date +%s 00:00:38.490 04:55:15 -- pm/common@25 -- $ sleep 1 00:00:38.490 04:55:15 -- pm/common@21 -- $ date +%s 00:00:38.490 04:55:15 -- pm/common@21 -- $ date +%s 00:00:38.490 04:55:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733716515 00:00:38.490 04:55:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733716515 00:00:38.490 04:55:15 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733716515 00:00:38.490 04:55:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733716515 00:00:38.490 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733716515_collect-vmstat.pm.log 00:00:38.490 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733716515_collect-cpu-load.pm.log 00:00:38.490 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733716515_collect-cpu-temp.pm.log 00:00:38.490 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733716515_collect-bmc-pm.bmc.pm.log 00:00:39.436 04:55:16 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:00:39.436 04:55:16 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:39.436 04:55:16 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:39.436 04:55:16 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:39.436 04:55:16 -- spdk/autobuild.sh@16 -- $ date -u 00:00:39.436 Mon Dec 9 03:55:16 AM UTC 2024 00:00:39.436 04:55:16 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:39.436 v25.01-pre-277-g421ce3854 00:00:39.436 04:55:16 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:39.436 04:55:16 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:39.436 04:55:16 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:39.436 04:55:16 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:00:39.436 04:55:16 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:00:39.436 04:55:16 -- common/autotest_common.sh@10 -- $ set +x 00:00:39.695 
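For reference, everything up to this point boils down to writing the job's autorun-spdk.conf and handing it to spdk/autorun.sh, as the xtrace above shows. A minimal sketch of doing the same outside Jenkins, assuming an SPDK checkout in ./spdk and a scratch directory of your choice; the variable values are the ones printed by the "cat autorun-spdk.conf" step earlier in this log, and the conf path is a placeholder:

    # Hypothetical local reproduction; conf values copied from the log above.
    cat > autorun-spdk.conf <<'EOF'
    SPDK_RUN_FUNCTIONAL_TEST=1
    SPDK_TEST_NVMF=1
    SPDK_TEST_NVME_CLI=1
    SPDK_TEST_NVMF_TRANSPORT=tcp
    SPDK_TEST_NVMF_NICS=e810
    SPDK_TEST_VFIOUSER=1
    SPDK_RUN_UBSAN=1
    NET_TYPE=phy
    RUN_NIGHTLY=0
    EOF
    ./spdk/autorun.sh "$PWD/autorun-spdk.conf"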
************************************ 00:00:39.695 START TEST ubsan 00:00:39.695 ************************************ 00:00:39.695 04:55:16 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:00:39.695 using ubsan 00:00:39.695 00:00:39.695 real 0m0.000s 00:00:39.695 user 0m0.000s 00:00:39.695 sys 0m0.000s 00:00:39.695 04:55:16 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:00:39.695 04:55:16 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:39.695 ************************************ 00:00:39.695 END TEST ubsan 00:00:39.695 ************************************ 00:00:39.695 04:55:16 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:39.695 04:55:16 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:39.695 04:55:16 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:39.695 04:55:16 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:39.695 04:55:16 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:39.695 04:55:16 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:39.695 04:55:16 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:39.695 04:55:16 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:39.696 04:55:16 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:39.696 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:39.696 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:39.955 Using 'verbs' RDMA provider 00:00:53.115 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:03.098 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:03.710 Creating mk/config.mk...done. 00:01:03.710 Creating mk/cc.flags.mk...done. 00:01:03.710 Type 'make' to build. 00:01:03.710 04:55:40 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:01:03.710 04:55:40 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:03.710 04:55:40 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:03.710 04:55:40 -- common/autotest_common.sh@10 -- $ set +x 00:01:04.009 ************************************ 00:01:04.009 START TEST make 00:01:04.009 ************************************ 00:01:04.009 04:55:40 make -- common/autotest_common.sh@1129 -- $ make -j96 00:01:04.270 make[1]: Nothing to be done for 'all'. 
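For reference, the configure invocation recorded above can be repeated by hand with the same flags autobuild.sh selected for this job. A sketch, assuming the same SPDK source tree and that fio sources live at /usr/src/fio as on this builder (drop or adjust --with-fio otherwise), and substituting nproc for the builder's fixed -j96:

    # Same flags as the logged configure step; adjust paths for your machine.
    cd spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j"$(nproc)"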
00:01:05.660 The Meson build system 00:01:05.660 Version: 1.5.0 00:01:05.660 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:05.660 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:05.660 Build type: native build 00:01:05.660 Project name: libvfio-user 00:01:05.660 Project version: 0.0.1 00:01:05.660 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:05.660 C linker for the host machine: cc ld.bfd 2.40-14 00:01:05.660 Host machine cpu family: x86_64 00:01:05.660 Host machine cpu: x86_64 00:01:05.660 Run-time dependency threads found: YES 00:01:05.660 Library dl found: YES 00:01:05.660 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:05.660 Run-time dependency json-c found: YES 0.17 00:01:05.660 Run-time dependency cmocka found: YES 1.1.7 00:01:05.660 Program pytest-3 found: NO 00:01:05.660 Program flake8 found: NO 00:01:05.660 Program misspell-fixer found: NO 00:01:05.661 Program restructuredtext-lint found: NO 00:01:05.661 Program valgrind found: YES (/usr/bin/valgrind) 00:01:05.661 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:05.661 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:05.661 Compiler for C supports arguments -Wwrite-strings: YES 00:01:05.661 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:05.661 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:05.661 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:05.661 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:05.661 Build targets in project: 8 00:01:05.661 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:05.661 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:05.661 00:01:05.661 libvfio-user 0.0.1 00:01:05.661 00:01:05.661 User defined options 00:01:05.661 buildtype : debug 00:01:05.661 default_library: shared 00:01:05.661 libdir : /usr/local/lib 00:01:05.661 00:01:05.661 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:05.918 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:06.176 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:06.176 [2/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:06.176 [3/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:06.176 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:06.176 [5/37] Compiling C object samples/null.p/null.c.o 00:01:06.176 [6/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:06.176 [7/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:06.176 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:06.176 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:06.176 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:06.176 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:06.176 [12/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:06.176 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:06.176 [14/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:06.176 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:06.176 [16/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:06.176 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:06.176 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:06.176 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:06.176 [20/37] Compiling C object samples/server.p/server.c.o 00:01:06.176 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:06.176 [22/37] Compiling C object samples/client.p/client.c.o 00:01:06.176 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:06.176 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:06.176 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:06.176 [26/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:06.176 [27/37] Linking target samples/client 00:01:06.176 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:06.176 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:06.433 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:06.433 [31/37] Linking target test/unit_tests 00:01:06.433 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:06.433 [33/37] Linking target samples/gpio-pci-idio-16 00:01:06.433 [34/37] Linking target samples/lspci 00:01:06.433 [35/37] Linking target samples/server 00:01:06.433 [36/37] Linking target samples/shadow_ioeventfd_server 00:01:06.433 [37/37] Linking target samples/null 00:01:06.433 INFO: autodetecting backend as ninja 00:01:06.433 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:06.691 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:06.948 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:06.948 ninja: no work to do. 00:01:12.217 The Meson build system 00:01:12.217 Version: 1.5.0 00:01:12.217 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:12.217 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:12.217 Build type: native build 00:01:12.217 Program cat found: YES (/usr/bin/cat) 00:01:12.217 Project name: DPDK 00:01:12.217 Project version: 24.03.0 00:01:12.217 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:12.217 C linker for the host machine: cc ld.bfd 2.40-14 00:01:12.217 Host machine cpu family: x86_64 00:01:12.217 Host machine cpu: x86_64 00:01:12.217 Message: ## Building in Developer Mode ## 00:01:12.217 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:12.217 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:12.217 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:12.217 Program python3 found: YES (/usr/bin/python3) 00:01:12.217 Program cat found: YES (/usr/bin/cat) 00:01:12.217 Compiler for C supports arguments -march=native: YES 00:01:12.217 Checking for size of "void *" : 8 00:01:12.217 Checking for size of "void *" : 8 (cached) 00:01:12.217 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:12.217 Library m found: YES 00:01:12.217 Library numa found: YES 00:01:12.217 Has header "numaif.h" : YES 00:01:12.217 Library fdt found: NO 00:01:12.217 Library execinfo found: NO 00:01:12.217 Has header "execinfo.h" : YES 00:01:12.217 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:12.217 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:12.217 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:12.217 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:12.217 Run-time dependency openssl found: YES 3.1.1 00:01:12.217 Run-time dependency libpcap found: YES 1.10.4 00:01:12.217 Has header "pcap.h" with dependency libpcap: YES 00:01:12.217 Compiler for C supports arguments -Wcast-qual: YES 00:01:12.217 Compiler for C supports arguments -Wdeprecated: YES 00:01:12.217 Compiler for C supports arguments -Wformat: YES 00:01:12.217 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:12.217 Compiler for C supports arguments -Wformat-security: NO 00:01:12.217 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:12.217 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:12.217 Compiler for C supports arguments -Wnested-externs: YES 00:01:12.217 Compiler for C supports arguments -Wold-style-definition: YES 00:01:12.217 Compiler for C supports arguments -Wpointer-arith: YES 00:01:12.217 Compiler for C supports arguments -Wsign-compare: YES 00:01:12.217 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:12.217 Compiler for C supports arguments -Wundef: YES 00:01:12.217 Compiler for C supports arguments -Wwrite-strings: YES 00:01:12.217 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:12.217 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:01:12.217 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:12.217 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:12.217 Program objdump found: YES (/usr/bin/objdump) 00:01:12.217 Compiler for C supports arguments -mavx512f: YES 00:01:12.217 Checking if "AVX512 checking" compiles: YES 00:01:12.217 Fetching value of define "__SSE4_2__" : 1 00:01:12.217 Fetching value of define "__AES__" : 1 00:01:12.217 Fetching value of define "__AVX__" : 1 00:01:12.217 Fetching value of define "__AVX2__" : 1 00:01:12.217 Fetching value of define "__AVX512BW__" : 1 00:01:12.217 Fetching value of define "__AVX512CD__" : 1 00:01:12.217 Fetching value of define "__AVX512DQ__" : 1 00:01:12.217 Fetching value of define "__AVX512F__" : 1 00:01:12.217 Fetching value of define "__AVX512VL__" : 1 00:01:12.217 Fetching value of define "__PCLMUL__" : 1 00:01:12.217 Fetching value of define "__RDRND__" : 1 00:01:12.217 Fetching value of define "__RDSEED__" : 1 00:01:12.217 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:12.217 Fetching value of define "__znver1__" : (undefined) 00:01:12.217 Fetching value of define "__znver2__" : (undefined) 00:01:12.217 Fetching value of define "__znver3__" : (undefined) 00:01:12.217 Fetching value of define "__znver4__" : (undefined) 00:01:12.217 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:12.217 Message: lib/log: Defining dependency "log" 00:01:12.217 Message: lib/kvargs: Defining dependency "kvargs" 00:01:12.217 Message: lib/telemetry: Defining dependency "telemetry" 00:01:12.217 Checking for function "getentropy" : NO 00:01:12.217 Message: lib/eal: Defining dependency "eal" 00:01:12.217 Message: lib/ring: Defining dependency "ring" 00:01:12.217 Message: lib/rcu: Defining dependency "rcu" 00:01:12.217 Message: lib/mempool: Defining dependency "mempool" 00:01:12.217 Message: lib/mbuf: Defining dependency "mbuf" 00:01:12.217 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:12.217 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:12.217 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:12.217 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:12.217 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:12.217 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:12.217 Compiler for C supports arguments -mpclmul: YES 00:01:12.217 Compiler for C supports arguments -maes: YES 00:01:12.217 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:12.217 Compiler for C supports arguments -mavx512bw: YES 00:01:12.217 Compiler for C supports arguments -mavx512dq: YES 00:01:12.217 Compiler for C supports arguments -mavx512vl: YES 00:01:12.217 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:12.217 Compiler for C supports arguments -mavx2: YES 00:01:12.217 Compiler for C supports arguments -mavx: YES 00:01:12.217 Message: lib/net: Defining dependency "net" 00:01:12.217 Message: lib/meter: Defining dependency "meter" 00:01:12.217 Message: lib/ethdev: Defining dependency "ethdev" 00:01:12.217 Message: lib/pci: Defining dependency "pci" 00:01:12.217 Message: lib/cmdline: Defining dependency "cmdline" 00:01:12.217 Message: lib/hash: Defining dependency "hash" 00:01:12.217 Message: lib/timer: Defining dependency "timer" 00:01:12.217 Message: lib/compressdev: Defining dependency "compressdev" 00:01:12.217 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:12.217 Message: lib/dmadev: Defining dependency 
"dmadev" 00:01:12.217 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:12.217 Message: lib/power: Defining dependency "power" 00:01:12.217 Message: lib/reorder: Defining dependency "reorder" 00:01:12.217 Message: lib/security: Defining dependency "security" 00:01:12.217 Has header "linux/userfaultfd.h" : YES 00:01:12.217 Has header "linux/vduse.h" : YES 00:01:12.217 Message: lib/vhost: Defining dependency "vhost" 00:01:12.217 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:12.217 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:12.217 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:12.217 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:12.217 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:12.217 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:12.217 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:12.217 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:12.217 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:12.217 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:12.218 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:12.218 Configuring doxy-api-html.conf using configuration 00:01:12.218 Configuring doxy-api-man.conf using configuration 00:01:12.218 Program mandb found: YES (/usr/bin/mandb) 00:01:12.218 Program sphinx-build found: NO 00:01:12.218 Configuring rte_build_config.h using configuration 00:01:12.218 Message: 00:01:12.218 ================= 00:01:12.218 Applications Enabled 00:01:12.218 ================= 00:01:12.218 00:01:12.218 apps: 00:01:12.218 00:01:12.218 00:01:12.218 Message: 00:01:12.218 ================= 00:01:12.218 Libraries Enabled 00:01:12.218 ================= 00:01:12.218 00:01:12.218 libs: 00:01:12.218 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:12.218 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:12.218 cryptodev, dmadev, power, reorder, security, vhost, 00:01:12.218 00:01:12.218 Message: 00:01:12.218 =============== 00:01:12.218 Drivers Enabled 00:01:12.218 =============== 00:01:12.218 00:01:12.218 common: 00:01:12.218 00:01:12.218 bus: 00:01:12.218 pci, vdev, 00:01:12.218 mempool: 00:01:12.218 ring, 00:01:12.218 dma: 00:01:12.218 00:01:12.218 net: 00:01:12.218 00:01:12.218 crypto: 00:01:12.218 00:01:12.218 compress: 00:01:12.218 00:01:12.218 vdpa: 00:01:12.218 00:01:12.218 00:01:12.218 Message: 00:01:12.218 ================= 00:01:12.218 Content Skipped 00:01:12.218 ================= 00:01:12.218 00:01:12.218 apps: 00:01:12.218 dumpcap: explicitly disabled via build config 00:01:12.218 graph: explicitly disabled via build config 00:01:12.218 pdump: explicitly disabled via build config 00:01:12.218 proc-info: explicitly disabled via build config 00:01:12.218 test-acl: explicitly disabled via build config 00:01:12.218 test-bbdev: explicitly disabled via build config 00:01:12.218 test-cmdline: explicitly disabled via build config 00:01:12.218 test-compress-perf: explicitly disabled via build config 00:01:12.218 test-crypto-perf: explicitly disabled via build config 00:01:12.218 test-dma-perf: explicitly disabled via build config 00:01:12.218 test-eventdev: explicitly disabled via build config 00:01:12.218 test-fib: explicitly disabled via build config 00:01:12.218 test-flow-perf: explicitly disabled via build config 00:01:12.218 test-gpudev: explicitly 
disabled via build config 00:01:12.218 test-mldev: explicitly disabled via build config 00:01:12.218 test-pipeline: explicitly disabled via build config 00:01:12.218 test-pmd: explicitly disabled via build config 00:01:12.218 test-regex: explicitly disabled via build config 00:01:12.218 test-sad: explicitly disabled via build config 00:01:12.218 test-security-perf: explicitly disabled via build config 00:01:12.218 00:01:12.218 libs: 00:01:12.218 argparse: explicitly disabled via build config 00:01:12.218 metrics: explicitly disabled via build config 00:01:12.218 acl: explicitly disabled via build config 00:01:12.218 bbdev: explicitly disabled via build config 00:01:12.218 bitratestats: explicitly disabled via build config 00:01:12.218 bpf: explicitly disabled via build config 00:01:12.218 cfgfile: explicitly disabled via build config 00:01:12.218 distributor: explicitly disabled via build config 00:01:12.218 efd: explicitly disabled via build config 00:01:12.218 eventdev: explicitly disabled via build config 00:01:12.218 dispatcher: explicitly disabled via build config 00:01:12.218 gpudev: explicitly disabled via build config 00:01:12.218 gro: explicitly disabled via build config 00:01:12.218 gso: explicitly disabled via build config 00:01:12.218 ip_frag: explicitly disabled via build config 00:01:12.218 jobstats: explicitly disabled via build config 00:01:12.218 latencystats: explicitly disabled via build config 00:01:12.218 lpm: explicitly disabled via build config 00:01:12.218 member: explicitly disabled via build config 00:01:12.218 pcapng: explicitly disabled via build config 00:01:12.218 rawdev: explicitly disabled via build config 00:01:12.218 regexdev: explicitly disabled via build config 00:01:12.218 mldev: explicitly disabled via build config 00:01:12.218 rib: explicitly disabled via build config 00:01:12.218 sched: explicitly disabled via build config 00:01:12.218 stack: explicitly disabled via build config 00:01:12.218 ipsec: explicitly disabled via build config 00:01:12.218 pdcp: explicitly disabled via build config 00:01:12.218 fib: explicitly disabled via build config 00:01:12.218 port: explicitly disabled via build config 00:01:12.218 pdump: explicitly disabled via build config 00:01:12.218 table: explicitly disabled via build config 00:01:12.218 pipeline: explicitly disabled via build config 00:01:12.218 graph: explicitly disabled via build config 00:01:12.218 node: explicitly disabled via build config 00:01:12.218 00:01:12.218 drivers: 00:01:12.218 common/cpt: not in enabled drivers build config 00:01:12.218 common/dpaax: not in enabled drivers build config 00:01:12.218 common/iavf: not in enabled drivers build config 00:01:12.218 common/idpf: not in enabled drivers build config 00:01:12.218 common/ionic: not in enabled drivers build config 00:01:12.218 common/mvep: not in enabled drivers build config 00:01:12.218 common/octeontx: not in enabled drivers build config 00:01:12.218 bus/auxiliary: not in enabled drivers build config 00:01:12.218 bus/cdx: not in enabled drivers build config 00:01:12.218 bus/dpaa: not in enabled drivers build config 00:01:12.218 bus/fslmc: not in enabled drivers build config 00:01:12.218 bus/ifpga: not in enabled drivers build config 00:01:12.218 bus/platform: not in enabled drivers build config 00:01:12.218 bus/uacce: not in enabled drivers build config 00:01:12.218 bus/vmbus: not in enabled drivers build config 00:01:12.218 common/cnxk: not in enabled drivers build config 00:01:12.218 common/mlx5: not in enabled drivers build config 
00:01:12.218 common/nfp: not in enabled drivers build config 00:01:12.218 common/nitrox: not in enabled drivers build config 00:01:12.218 common/qat: not in enabled drivers build config 00:01:12.218 common/sfc_efx: not in enabled drivers build config 00:01:12.218 mempool/bucket: not in enabled drivers build config 00:01:12.218 mempool/cnxk: not in enabled drivers build config 00:01:12.218 mempool/dpaa: not in enabled drivers build config 00:01:12.218 mempool/dpaa2: not in enabled drivers build config 00:01:12.218 mempool/octeontx: not in enabled drivers build config 00:01:12.218 mempool/stack: not in enabled drivers build config 00:01:12.218 dma/cnxk: not in enabled drivers build config 00:01:12.218 dma/dpaa: not in enabled drivers build config 00:01:12.218 dma/dpaa2: not in enabled drivers build config 00:01:12.218 dma/hisilicon: not in enabled drivers build config 00:01:12.218 dma/idxd: not in enabled drivers build config 00:01:12.218 dma/ioat: not in enabled drivers build config 00:01:12.218 dma/skeleton: not in enabled drivers build config 00:01:12.218 net/af_packet: not in enabled drivers build config 00:01:12.218 net/af_xdp: not in enabled drivers build config 00:01:12.218 net/ark: not in enabled drivers build config 00:01:12.218 net/atlantic: not in enabled drivers build config 00:01:12.218 net/avp: not in enabled drivers build config 00:01:12.218 net/axgbe: not in enabled drivers build config 00:01:12.218 net/bnx2x: not in enabled drivers build config 00:01:12.218 net/bnxt: not in enabled drivers build config 00:01:12.218 net/bonding: not in enabled drivers build config 00:01:12.218 net/cnxk: not in enabled drivers build config 00:01:12.218 net/cpfl: not in enabled drivers build config 00:01:12.218 net/cxgbe: not in enabled drivers build config 00:01:12.218 net/dpaa: not in enabled drivers build config 00:01:12.218 net/dpaa2: not in enabled drivers build config 00:01:12.218 net/e1000: not in enabled drivers build config 00:01:12.218 net/ena: not in enabled drivers build config 00:01:12.218 net/enetc: not in enabled drivers build config 00:01:12.218 net/enetfec: not in enabled drivers build config 00:01:12.218 net/enic: not in enabled drivers build config 00:01:12.218 net/failsafe: not in enabled drivers build config 00:01:12.218 net/fm10k: not in enabled drivers build config 00:01:12.218 net/gve: not in enabled drivers build config 00:01:12.218 net/hinic: not in enabled drivers build config 00:01:12.218 net/hns3: not in enabled drivers build config 00:01:12.218 net/i40e: not in enabled drivers build config 00:01:12.218 net/iavf: not in enabled drivers build config 00:01:12.218 net/ice: not in enabled drivers build config 00:01:12.218 net/idpf: not in enabled drivers build config 00:01:12.218 net/igc: not in enabled drivers build config 00:01:12.218 net/ionic: not in enabled drivers build config 00:01:12.218 net/ipn3ke: not in enabled drivers build config 00:01:12.218 net/ixgbe: not in enabled drivers build config 00:01:12.218 net/mana: not in enabled drivers build config 00:01:12.218 net/memif: not in enabled drivers build config 00:01:12.218 net/mlx4: not in enabled drivers build config 00:01:12.218 net/mlx5: not in enabled drivers build config 00:01:12.218 net/mvneta: not in enabled drivers build config 00:01:12.218 net/mvpp2: not in enabled drivers build config 00:01:12.218 net/netvsc: not in enabled drivers build config 00:01:12.218 net/nfb: not in enabled drivers build config 00:01:12.218 net/nfp: not in enabled drivers build config 00:01:12.218 net/ngbe: not in enabled 
drivers build config 00:01:12.218 net/null: not in enabled drivers build config 00:01:12.218 net/octeontx: not in enabled drivers build config 00:01:12.218 net/octeon_ep: not in enabled drivers build config 00:01:12.218 net/pcap: not in enabled drivers build config 00:01:12.218 net/pfe: not in enabled drivers build config 00:01:12.218 net/qede: not in enabled drivers build config 00:01:12.218 net/ring: not in enabled drivers build config 00:01:12.218 net/sfc: not in enabled drivers build config 00:01:12.218 net/softnic: not in enabled drivers build config 00:01:12.218 net/tap: not in enabled drivers build config 00:01:12.218 net/thunderx: not in enabled drivers build config 00:01:12.218 net/txgbe: not in enabled drivers build config 00:01:12.218 net/vdev_netvsc: not in enabled drivers build config 00:01:12.218 net/vhost: not in enabled drivers build config 00:01:12.218 net/virtio: not in enabled drivers build config 00:01:12.218 net/vmxnet3: not in enabled drivers build config 00:01:12.218 raw/*: missing internal dependency, "rawdev" 00:01:12.218 crypto/armv8: not in enabled drivers build config 00:01:12.218 crypto/bcmfs: not in enabled drivers build config 00:01:12.219 crypto/caam_jr: not in enabled drivers build config 00:01:12.219 crypto/ccp: not in enabled drivers build config 00:01:12.219 crypto/cnxk: not in enabled drivers build config 00:01:12.219 crypto/dpaa_sec: not in enabled drivers build config 00:01:12.219 crypto/dpaa2_sec: not in enabled drivers build config 00:01:12.219 crypto/ipsec_mb: not in enabled drivers build config 00:01:12.219 crypto/mlx5: not in enabled drivers build config 00:01:12.219 crypto/mvsam: not in enabled drivers build config 00:01:12.219 crypto/nitrox: not in enabled drivers build config 00:01:12.219 crypto/null: not in enabled drivers build config 00:01:12.219 crypto/octeontx: not in enabled drivers build config 00:01:12.219 crypto/openssl: not in enabled drivers build config 00:01:12.219 crypto/scheduler: not in enabled drivers build config 00:01:12.219 crypto/uadk: not in enabled drivers build config 00:01:12.219 crypto/virtio: not in enabled drivers build config 00:01:12.219 compress/isal: not in enabled drivers build config 00:01:12.219 compress/mlx5: not in enabled drivers build config 00:01:12.219 compress/nitrox: not in enabled drivers build config 00:01:12.219 compress/octeontx: not in enabled drivers build config 00:01:12.219 compress/zlib: not in enabled drivers build config 00:01:12.219 regex/*: missing internal dependency, "regexdev" 00:01:12.219 ml/*: missing internal dependency, "mldev" 00:01:12.219 vdpa/ifc: not in enabled drivers build config 00:01:12.219 vdpa/mlx5: not in enabled drivers build config 00:01:12.219 vdpa/nfp: not in enabled drivers build config 00:01:12.219 vdpa/sfc: not in enabled drivers build config 00:01:12.219 event/*: missing internal dependency, "eventdev" 00:01:12.219 baseband/*: missing internal dependency, "bbdev" 00:01:12.219 gpu/*: missing internal dependency, "gpudev" 00:01:12.219 00:01:12.219 00:01:12.219 Build targets in project: 85 00:01:12.219 00:01:12.219 DPDK 24.03.0 00:01:12.219 00:01:12.219 User defined options 00:01:12.219 buildtype : debug 00:01:12.219 default_library : shared 00:01:12.219 libdir : lib 00:01:12.219 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:12.219 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:12.219 c_link_args : 00:01:12.219 cpu_instruction_set: native 00:01:12.219 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:12.219 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:12.219 enable_docs : false 00:01:12.219 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:12.219 enable_kmods : false 00:01:12.219 max_lcores : 128 00:01:12.219 tests : false 00:01:12.219 00:01:12.219 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:12.219 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:12.219 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:12.219 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:12.219 [3/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:12.219 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:12.219 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:12.483 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:12.483 [7/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:12.483 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:12.483 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:12.483 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:12.483 [11/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:12.483 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:12.483 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:12.483 [14/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:12.483 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:12.483 [16/268] Linking static target lib/librte_kvargs.a 00:01:12.483 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:12.483 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:12.483 [19/268] Linking static target lib/librte_log.a 00:01:12.483 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:12.483 [21/268] Linking static target lib/librte_pci.a 00:01:12.483 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:12.483 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:12.483 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:12.749 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:12.749 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:12.749 [27/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:12.749 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:12.749 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:12.749 [30/268] Linking static target lib/librte_meter.a 00:01:12.749 [31/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:12.749 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:12.749 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:12.749 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:12.749 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:12.749 [36/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:12.749 [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:12.749 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:12.749 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:12.749 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:12.749 [41/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:12.749 [42/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:12.749 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:12.749 [44/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:12.749 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:12.749 [46/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:12.749 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:12.749 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:12.749 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:12.749 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:12.749 [51/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:12.749 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:12.749 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:12.749 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:12.749 [55/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:12.749 [56/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:12.749 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:12.749 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:12.749 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:12.749 [60/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:12.749 [61/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:12.749 [62/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:12.749 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:12.749 [64/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:12.749 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:12.749 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:12.749 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:12.749 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:12.749 [69/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:12.749 [70/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:12.749 [71/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:12.749 [72/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:13.009 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:13.009 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:13.009 [75/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:13.009 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:13.009 [77/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:13.009 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:13.009 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:13.009 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:13.009 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:13.009 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:13.009 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:13.009 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:13.009 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:13.009 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:13.009 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:13.009 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:13.009 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:13.009 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:13.009 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:13.009 [92/268] Linking static target lib/librte_ring.a 00:01:13.009 [93/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:13.009 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:13.009 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:13.009 [96/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.009 [97/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:13.009 [98/268] Linking static target lib/librte_telemetry.a 00:01:13.010 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:13.010 [100/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:13.010 [101/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:13.010 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:13.010 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:13.010 [104/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.010 [105/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:13.010 [106/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:13.010 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:13.010 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:13.010 [109/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:13.010 [110/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:13.010 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:13.010 [112/268] Linking static target lib/librte_mempool.a 00:01:13.010 [113/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:13.010 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:13.010 [115/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:13.010 [116/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:13.010 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:13.010 [118/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:13.010 [119/268] Linking static target lib/librte_rcu.a 00:01:13.010 [120/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:13.010 [121/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:13.010 [122/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:13.010 [123/268] Linking static target lib/librte_net.a 00:01:13.010 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:13.010 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:13.010 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:13.010 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:13.010 [128/268] Linking static target lib/librte_eal.a 00:01:13.010 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:13.010 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:13.010 [131/268] Linking static target lib/librte_cmdline.a 00:01:13.010 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:13.010 [133/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.010 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:13.010 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:13.270 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:13.270 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.270 [138/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.270 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:13.270 [140/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:13.270 [141/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:13.270 [142/268] Linking target lib/librte_log.so.24.1 00:01:13.270 [143/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:13.270 [144/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:13.270 [145/268] Linking static target lib/librte_mbuf.a 00:01:13.270 [146/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:13.270 [147/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:13.270 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:13.270 [149/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.270 [150/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:13.270 [151/268] 
Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:13.270 [152/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:13.270 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:13.270 [154/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.270 [155/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.270 [156/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:13.270 [157/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:13.270 [158/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:13.270 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:13.270 [160/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:13.270 [161/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:13.270 [162/268] Linking static target lib/librte_timer.a 00:01:13.270 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:13.270 [164/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:13.270 [165/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:13.270 [166/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:13.270 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:13.530 [168/268] Linking target lib/librte_telemetry.so.24.1 00:01:13.530 [169/268] Linking target lib/librte_kvargs.so.24.1 00:01:13.530 [170/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:13.530 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:13.530 [172/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:13.530 [173/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:13.530 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:13.530 [175/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:13.530 [176/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:13.530 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:13.530 [178/268] Linking static target lib/librte_security.a 00:01:13.530 [179/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:13.530 [180/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:13.530 [181/268] Linking static target lib/librte_dmadev.a 00:01:13.530 [182/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:13.530 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:13.530 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:13.530 [185/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:13.530 [186/268] Linking static target lib/librte_compressdev.a 00:01:13.530 [187/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:13.530 [188/268] Linking static target lib/librte_power.a 00:01:13.530 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:13.530 [190/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:13.530 [191/268] Compiling C object 
lib/librte_vhost.a.p/vhost_socket.c.o 00:01:13.530 [192/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:13.530 [193/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:13.530 [194/268] Linking static target lib/librte_hash.a 00:01:13.530 [195/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:13.530 [196/268] Linking static target lib/librte_reorder.a 00:01:13.530 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:13.530 [198/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.530 [199/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:13.789 [200/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:13.789 [201/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:13.789 [202/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:13.790 [203/268] Linking static target drivers/librte_bus_vdev.a 00:01:13.790 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:13.790 [205/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:13.790 [206/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:13.790 [207/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:13.790 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:13.790 [209/268] Linking static target drivers/librte_mempool_ring.a 00:01:13.790 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:13.790 [211/268] Linking static target drivers/librte_bus_pci.a 00:01:13.790 [212/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.790 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:13.790 [214/268] Linking static target lib/librte_cryptodev.a 00:01:14.049 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.049 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.049 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.049 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:14.049 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.049 [220/268] Linking static target lib/librte_ethdev.a 00:01:14.049 [221/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.049 [222/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.049 [223/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.307 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:14.307 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.307 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.565 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.499 
[228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:15.499 [229/268] Linking static target lib/librte_vhost.a 00:01:15.758 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.131 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.402 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.662 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.662 [234/268] Linking target lib/librte_eal.so.24.1 00:01:22.921 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:22.921 [236/268] Linking target lib/librte_dmadev.so.24.1 00:01:22.921 [237/268] Linking target lib/librte_timer.so.24.1 00:01:22.921 [238/268] Linking target lib/librte_ring.so.24.1 00:01:22.921 [239/268] Linking target lib/librte_meter.so.24.1 00:01:22.921 [240/268] Linking target lib/librte_pci.so.24.1 00:01:22.921 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:22.921 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:22.921 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:22.921 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:22.921 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:22.921 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:22.921 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:22.921 [248/268] Linking target lib/librte_rcu.so.24.1 00:01:22.921 [249/268] Linking target lib/librte_mempool.so.24.1 00:01:23.180 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:23.180 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:23.180 [252/268] Linking target lib/librte_mbuf.so.24.1 00:01:23.180 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:23.438 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:23.438 [255/268] Linking target lib/librte_compressdev.so.24.1 00:01:23.438 [256/268] Linking target lib/librte_net.so.24.1 00:01:23.438 [257/268] Linking target lib/librte_reorder.so.24.1 00:01:23.438 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:23.438 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:23.438 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:23.438 [261/268] Linking target lib/librte_security.so.24.1 00:01:23.438 [262/268] Linking target lib/librte_cmdline.so.24.1 00:01:23.438 [263/268] Linking target lib/librte_hash.so.24.1 00:01:23.438 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:23.696 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:23.696 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:23.696 [267/268] Linking target lib/librte_vhost.so.24.1 00:01:23.696 [268/268] Linking target lib/librte_power.so.24.1 00:01:23.696 INFO: autodetecting backend as ninja 00:01:23.696 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:35.890 CC lib/log/log.o 
00:01:35.890 CC lib/log/log_flags.o 00:01:35.890 CC lib/log/log_deprecated.o 00:01:35.890 CC lib/ut/ut.o 00:01:35.890 CC lib/ut_mock/mock.o 00:01:35.890 LIB libspdk_ut.a 00:01:35.890 LIB libspdk_log.a 00:01:35.890 LIB libspdk_ut_mock.a 00:01:35.890 SO libspdk_ut.so.2.0 00:01:35.890 SO libspdk_log.so.7.1 00:01:35.890 SO libspdk_ut_mock.so.6.0 00:01:35.890 SYMLINK libspdk_ut.so 00:01:35.890 SYMLINK libspdk_log.so 00:01:35.890 SYMLINK libspdk_ut_mock.so 00:01:35.890 CC lib/dma/dma.o 00:01:35.890 CC lib/util/base64.o 00:01:35.890 CC lib/util/bit_array.o 00:01:35.890 CC lib/util/cpuset.o 00:01:35.890 CC lib/util/crc32.o 00:01:35.890 CC lib/util/crc16.o 00:01:35.890 CC lib/util/crc32c.o 00:01:35.890 CC lib/util/crc64.o 00:01:35.890 CC lib/util/crc32_ieee.o 00:01:35.890 CC lib/ioat/ioat.o 00:01:35.890 CC lib/util/dif.o 00:01:35.890 CC lib/util/fd.o 00:01:35.890 CXX lib/trace_parser/trace.o 00:01:35.890 CC lib/util/fd_group.o 00:01:35.890 CC lib/util/file.o 00:01:35.890 CC lib/util/hexlify.o 00:01:35.890 CC lib/util/iov.o 00:01:35.890 CC lib/util/math.o 00:01:35.890 CC lib/util/net.o 00:01:35.890 CC lib/util/pipe.o 00:01:35.890 CC lib/util/strerror_tls.o 00:01:35.890 CC lib/util/string.o 00:01:35.890 CC lib/util/uuid.o 00:01:35.890 CC lib/util/xor.o 00:01:35.890 CC lib/util/zipf.o 00:01:35.890 CC lib/util/md5.o 00:01:35.890 CC lib/vfio_user/host/vfio_user_pci.o 00:01:35.890 CC lib/vfio_user/host/vfio_user.o 00:01:35.890 LIB libspdk_dma.a 00:01:35.890 SO libspdk_dma.so.5.0 00:01:35.890 LIB libspdk_ioat.a 00:01:35.890 SYMLINK libspdk_dma.so 00:01:35.890 SO libspdk_ioat.so.7.0 00:01:35.890 SYMLINK libspdk_ioat.so 00:01:35.890 LIB libspdk_vfio_user.a 00:01:35.890 SO libspdk_vfio_user.so.5.0 00:01:36.150 SYMLINK libspdk_vfio_user.so 00:01:36.150 LIB libspdk_util.a 00:01:36.150 SO libspdk_util.so.10.1 00:01:36.150 SYMLINK libspdk_util.so 00:01:36.150 LIB libspdk_trace_parser.a 00:01:36.409 SO libspdk_trace_parser.so.6.0 00:01:36.409 SYMLINK libspdk_trace_parser.so 00:01:36.409 CC lib/env_dpdk/env.o 00:01:36.409 CC lib/env_dpdk/memory.o 00:01:36.409 CC lib/json/json_parse.o 00:01:36.409 CC lib/env_dpdk/pci.o 00:01:36.409 CC lib/rdma_utils/rdma_utils.o 00:01:36.409 CC lib/env_dpdk/init.o 00:01:36.409 CC lib/env_dpdk/threads.o 00:01:36.409 CC lib/json/json_util.o 00:01:36.409 CC lib/json/json_write.o 00:01:36.409 CC lib/env_dpdk/pci_ioat.o 00:01:36.409 CC lib/env_dpdk/pci_virtio.o 00:01:36.409 CC lib/env_dpdk/pci_vmd.o 00:01:36.409 CC lib/env_dpdk/pci_idxd.o 00:01:36.409 CC lib/env_dpdk/pci_event.o 00:01:36.409 CC lib/conf/conf.o 00:01:36.669 CC lib/env_dpdk/sigbus_handler.o 00:01:36.669 CC lib/env_dpdk/pci_dpdk.o 00:01:36.669 CC lib/vmd/vmd.o 00:01:36.669 CC lib/vmd/led.o 00:01:36.669 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:36.669 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:36.669 CC lib/idxd/idxd.o 00:01:36.669 CC lib/idxd/idxd_user.o 00:01:36.669 CC lib/idxd/idxd_kernel.o 00:01:36.669 LIB libspdk_conf.a 00:01:36.928 SO libspdk_conf.so.6.0 00:01:36.928 LIB libspdk_rdma_utils.a 00:01:36.928 LIB libspdk_json.a 00:01:36.928 SO libspdk_rdma_utils.so.1.0 00:01:36.928 SYMLINK libspdk_conf.so 00:01:36.928 SO libspdk_json.so.6.0 00:01:36.928 SYMLINK libspdk_rdma_utils.so 00:01:36.928 SYMLINK libspdk_json.so 00:01:36.928 LIB libspdk_idxd.a 00:01:36.928 LIB libspdk_vmd.a 00:01:36.928 SO libspdk_idxd.so.12.1 00:01:37.185 SO libspdk_vmd.so.6.0 00:01:37.185 SYMLINK libspdk_idxd.so 00:01:37.185 SYMLINK libspdk_vmd.so 00:01:37.185 CC lib/rdma_provider/common.o 00:01:37.185 CC lib/rdma_provider/rdma_provider_verbs.o 
00:01:37.185 CC lib/jsonrpc/jsonrpc_server.o 00:01:37.185 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:37.185 CC lib/jsonrpc/jsonrpc_client.o 00:01:37.185 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:37.443 LIB libspdk_rdma_provider.a 00:01:37.443 SO libspdk_rdma_provider.so.7.0 00:01:37.443 LIB libspdk_jsonrpc.a 00:01:37.443 SO libspdk_jsonrpc.so.6.0 00:01:37.443 SYMLINK libspdk_rdma_provider.so 00:01:37.443 SYMLINK libspdk_jsonrpc.so 00:01:37.701 LIB libspdk_env_dpdk.a 00:01:37.701 SO libspdk_env_dpdk.so.15.1 00:01:37.701 SYMLINK libspdk_env_dpdk.so 00:01:37.701 CC lib/rpc/rpc.o 00:01:37.961 LIB libspdk_rpc.a 00:01:37.961 SO libspdk_rpc.so.6.0 00:01:37.961 SYMLINK libspdk_rpc.so 00:01:38.221 CC lib/keyring/keyring.o 00:01:38.221 CC lib/keyring/keyring_rpc.o 00:01:38.480 CC lib/notify/notify.o 00:01:38.480 CC lib/notify/notify_rpc.o 00:01:38.480 CC lib/trace/trace.o 00:01:38.480 CC lib/trace/trace_flags.o 00:01:38.480 CC lib/trace/trace_rpc.o 00:01:38.480 LIB libspdk_notify.a 00:01:38.480 LIB libspdk_keyring.a 00:01:38.480 SO libspdk_notify.so.6.0 00:01:38.480 SO libspdk_keyring.so.2.0 00:01:38.480 SYMLINK libspdk_keyring.so 00:01:38.480 LIB libspdk_trace.a 00:01:38.480 SYMLINK libspdk_notify.so 00:01:38.739 SO libspdk_trace.so.11.0 00:01:38.739 SYMLINK libspdk_trace.so 00:01:38.998 CC lib/thread/thread.o 00:01:38.998 CC lib/thread/iobuf.o 00:01:38.998 CC lib/sock/sock.o 00:01:38.998 CC lib/sock/sock_rpc.o 00:01:39.259 LIB libspdk_sock.a 00:01:39.259 SO libspdk_sock.so.10.0 00:01:39.518 SYMLINK libspdk_sock.so 00:01:39.776 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:39.776 CC lib/nvme/nvme_ctrlr.o 00:01:39.776 CC lib/nvme/nvme_ns_cmd.o 00:01:39.776 CC lib/nvme/nvme_fabric.o 00:01:39.776 CC lib/nvme/nvme_ns.o 00:01:39.776 CC lib/nvme/nvme_pcie_common.o 00:01:39.776 CC lib/nvme/nvme.o 00:01:39.776 CC lib/nvme/nvme_pcie.o 00:01:39.776 CC lib/nvme/nvme_qpair.o 00:01:39.776 CC lib/nvme/nvme_quirks.o 00:01:39.776 CC lib/nvme/nvme_transport.o 00:01:39.776 CC lib/nvme/nvme_discovery.o 00:01:39.776 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:39.776 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:39.776 CC lib/nvme/nvme_tcp.o 00:01:39.776 CC lib/nvme/nvme_opal.o 00:01:39.776 CC lib/nvme/nvme_io_msg.o 00:01:39.776 CC lib/nvme/nvme_poll_group.o 00:01:39.776 CC lib/nvme/nvme_zns.o 00:01:39.776 CC lib/nvme/nvme_stubs.o 00:01:39.776 CC lib/nvme/nvme_auth.o 00:01:39.776 CC lib/nvme/nvme_cuse.o 00:01:39.776 CC lib/nvme/nvme_vfio_user.o 00:01:39.776 CC lib/nvme/nvme_rdma.o 00:01:40.033 LIB libspdk_thread.a 00:01:40.033 SO libspdk_thread.so.11.0 00:01:40.034 SYMLINK libspdk_thread.so 00:01:40.291 CC lib/accel/accel.o 00:01:40.291 CC lib/accel/accel_sw.o 00:01:40.291 CC lib/accel/accel_rpc.o 00:01:40.291 CC lib/virtio/virtio_vhost_user.o 00:01:40.291 CC lib/virtio/virtio.o 00:01:40.291 CC lib/virtio/virtio_vfio_user.o 00:01:40.291 CC lib/fsdev/fsdev.o 00:01:40.291 CC lib/virtio/virtio_pci.o 00:01:40.291 CC lib/blob/blobstore.o 00:01:40.291 CC lib/blob/request.o 00:01:40.291 CC lib/fsdev/fsdev_io.o 00:01:40.291 CC lib/blob/blob_bs_dev.o 00:01:40.291 CC lib/fsdev/fsdev_rpc.o 00:01:40.291 CC lib/blob/zeroes.o 00:01:40.291 CC lib/init/json_config.o 00:01:40.291 CC lib/init/subsystem.o 00:01:40.291 CC lib/init/subsystem_rpc.o 00:01:40.291 CC lib/init/rpc.o 00:01:40.291 CC lib/vfu_tgt/tgt_rpc.o 00:01:40.291 CC lib/vfu_tgt/tgt_endpoint.o 00:01:40.549 LIB libspdk_init.a 00:01:40.549 SO libspdk_init.so.6.0 00:01:40.807 LIB libspdk_virtio.a 00:01:40.807 LIB libspdk_vfu_tgt.a 00:01:40.807 SO libspdk_virtio.so.7.0 00:01:40.807 SO 
libspdk_vfu_tgt.so.3.0 00:01:40.807 SYMLINK libspdk_init.so 00:01:40.807 SYMLINK libspdk_virtio.so 00:01:40.807 SYMLINK libspdk_vfu_tgt.so 00:01:40.807 LIB libspdk_fsdev.a 00:01:41.065 SO libspdk_fsdev.so.2.0 00:01:41.065 CC lib/event/app.o 00:01:41.065 CC lib/event/reactor.o 00:01:41.065 CC lib/event/app_rpc.o 00:01:41.065 CC lib/event/log_rpc.o 00:01:41.065 SYMLINK libspdk_fsdev.so 00:01:41.065 CC lib/event/scheduler_static.o 00:01:41.324 LIB libspdk_accel.a 00:01:41.324 SO libspdk_accel.so.16.0 00:01:41.324 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:01:41.324 SYMLINK libspdk_accel.so 00:01:41.324 LIB libspdk_event.a 00:01:41.324 SO libspdk_event.so.14.0 00:01:41.324 LIB libspdk_nvme.a 00:01:41.582 SYMLINK libspdk_event.so 00:01:41.582 SO libspdk_nvme.so.15.0 00:01:41.582 CC lib/bdev/bdev.o 00:01:41.582 CC lib/bdev/bdev_rpc.o 00:01:41.582 CC lib/bdev/bdev_zone.o 00:01:41.582 CC lib/bdev/part.o 00:01:41.582 CC lib/bdev/scsi_nvme.o 00:01:41.582 SYMLINK libspdk_nvme.so 00:01:41.855 LIB libspdk_fuse_dispatcher.a 00:01:41.855 SO libspdk_fuse_dispatcher.so.1.0 00:01:41.855 SYMLINK libspdk_fuse_dispatcher.so 00:01:42.422 LIB libspdk_blob.a 00:01:42.681 SO libspdk_blob.so.12.0 00:01:42.681 SYMLINK libspdk_blob.so 00:01:42.940 CC lib/blobfs/blobfs.o 00:01:42.940 CC lib/blobfs/tree.o 00:01:42.940 CC lib/lvol/lvol.o 00:01:43.507 LIB libspdk_bdev.a 00:01:43.507 SO libspdk_bdev.so.17.0 00:01:43.507 LIB libspdk_blobfs.a 00:01:43.507 SO libspdk_blobfs.so.11.0 00:01:43.507 SYMLINK libspdk_bdev.so 00:01:43.767 LIB libspdk_lvol.a 00:01:43.767 SYMLINK libspdk_blobfs.so 00:01:43.767 SO libspdk_lvol.so.11.0 00:01:43.767 SYMLINK libspdk_lvol.so 00:01:43.767 CC lib/nvmf/ctrlr.o 00:01:43.767 CC lib/nvmf/ctrlr_bdev.o 00:01:43.767 CC lib/nvmf/ctrlr_discovery.o 00:01:43.767 CC lib/nvmf/subsystem.o 00:01:43.767 CC lib/nvmf/nvmf.o 00:01:43.767 CC lib/ublk/ublk_rpc.o 00:01:43.767 CC lib/ublk/ublk.o 00:01:43.767 CC lib/nvmf/nvmf_rpc.o 00:01:43.767 CC lib/nvmf/tcp.o 00:01:43.767 CC lib/nvmf/transport.o 00:01:43.767 CC lib/nvmf/stubs.o 00:01:43.767 CC lib/nvmf/mdns_server.o 00:01:43.767 CC lib/scsi/dev.o 00:01:43.767 CC lib/scsi/lun.o 00:01:43.767 CC lib/scsi/scsi.o 00:01:43.767 CC lib/nvmf/vfio_user.o 00:01:43.767 CC lib/scsi/port.o 00:01:43.767 CC lib/nvmf/rdma.o 00:01:43.767 CC lib/nvmf/auth.o 00:01:43.767 CC lib/scsi/scsi_bdev.o 00:01:43.767 CC lib/scsi/scsi_pr.o 00:01:43.767 CC lib/scsi/scsi_rpc.o 00:01:43.767 CC lib/scsi/task.o 00:01:43.767 CC lib/nbd/nbd.o 00:01:43.767 CC lib/nbd/nbd_rpc.o 00:01:44.027 CC lib/ftl/ftl_core.o 00:01:44.027 CC lib/ftl/ftl_init.o 00:01:44.027 CC lib/ftl/ftl_layout.o 00:01:44.027 CC lib/ftl/ftl_sb.o 00:01:44.027 CC lib/ftl/ftl_debug.o 00:01:44.027 CC lib/ftl/ftl_io.o 00:01:44.027 CC lib/ftl/ftl_l2p.o 00:01:44.027 CC lib/ftl/ftl_l2p_flat.o 00:01:44.027 CC lib/ftl/ftl_nv_cache.o 00:01:44.027 CC lib/ftl/ftl_band_ops.o 00:01:44.027 CC lib/ftl/ftl_band.o 00:01:44.027 CC lib/ftl/ftl_writer.o 00:01:44.027 CC lib/ftl/ftl_rq.o 00:01:44.027 CC lib/ftl/ftl_reloc.o 00:01:44.027 CC lib/ftl/ftl_l2p_cache.o 00:01:44.027 CC lib/ftl/ftl_p2l.o 00:01:44.027 CC lib/ftl/ftl_p2l_log.o 00:01:44.027 CC lib/ftl/mngt/ftl_mngt.o 00:01:44.027 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:44.027 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:44.027 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:44.027 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:44.027 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:44.027 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:44.027 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:44.027 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:44.027 CC 
lib/ftl/mngt/ftl_mngt_p2l.o 00:01:44.027 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:44.027 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:44.027 CC lib/ftl/utils/ftl_conf.o 00:01:44.027 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:44.027 CC lib/ftl/utils/ftl_bitmap.o 00:01:44.027 CC lib/ftl/utils/ftl_mempool.o 00:01:44.027 CC lib/ftl/utils/ftl_md.o 00:01:44.027 CC lib/ftl/utils/ftl_property.o 00:01:44.027 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:44.027 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:44.027 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:44.027 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:44.027 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:44.027 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:44.027 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:44.027 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:44.027 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:44.027 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:44.027 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:01:44.027 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:01:44.027 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:44.027 CC lib/ftl/base/ftl_base_dev.o 00:01:44.027 CC lib/ftl/ftl_trace.o 00:01:44.027 CC lib/ftl/base/ftl_base_bdev.o 00:01:44.594 LIB libspdk_scsi.a 00:01:44.594 SO libspdk_scsi.so.9.0 00:01:44.594 LIB libspdk_nbd.a 00:01:44.594 SO libspdk_nbd.so.7.0 00:01:44.594 SYMLINK libspdk_scsi.so 00:01:44.594 SYMLINK libspdk_nbd.so 00:01:44.594 LIB libspdk_ublk.a 00:01:44.852 SO libspdk_ublk.so.3.0 00:01:44.852 SYMLINK libspdk_ublk.so 00:01:44.852 CC lib/vhost/vhost_rpc.o 00:01:44.852 CC lib/vhost/vhost.o 00:01:44.852 CC lib/vhost/vhost_blk.o 00:01:44.852 CC lib/vhost/vhost_scsi.o 00:01:44.852 CC lib/vhost/rte_vhost_user.o 00:01:44.852 CC lib/iscsi/conn.o 00:01:44.852 CC lib/iscsi/init_grp.o 00:01:44.852 CC lib/iscsi/iscsi.o 00:01:44.852 CC lib/iscsi/param.o 00:01:44.852 CC lib/iscsi/portal_grp.o 00:01:44.852 CC lib/iscsi/iscsi_subsystem.o 00:01:44.852 CC lib/iscsi/tgt_node.o 00:01:44.852 CC lib/iscsi/task.o 00:01:44.852 CC lib/iscsi/iscsi_rpc.o 00:01:44.852 LIB libspdk_ftl.a 00:01:45.110 SO libspdk_ftl.so.9.0 00:01:45.110 SYMLINK libspdk_ftl.so 00:01:45.675 LIB libspdk_nvmf.a 00:01:45.675 LIB libspdk_vhost.a 00:01:45.675 SO libspdk_nvmf.so.20.0 00:01:45.675 SO libspdk_vhost.so.8.0 00:01:45.675 SYMLINK libspdk_vhost.so 00:01:45.934 SYMLINK libspdk_nvmf.so 00:01:45.934 LIB libspdk_iscsi.a 00:01:45.934 SO libspdk_iscsi.so.8.0 00:01:45.934 SYMLINK libspdk_iscsi.so 00:01:46.502 CC module/vfu_device/vfu_virtio.o 00:01:46.502 CC module/vfu_device/vfu_virtio_scsi.o 00:01:46.502 CC module/vfu_device/vfu_virtio_blk.o 00:01:46.502 CC module/vfu_device/vfu_virtio_rpc.o 00:01:46.502 CC module/vfu_device/vfu_virtio_fs.o 00:01:46.502 CC module/env_dpdk/env_dpdk_rpc.o 00:01:46.502 CC module/accel/dsa/accel_dsa.o 00:01:46.502 CC module/accel/dsa/accel_dsa_rpc.o 00:01:46.502 CC module/accel/error/accel_error.o 00:01:46.502 CC module/sock/posix/posix.o 00:01:46.502 CC module/accel/error/accel_error_rpc.o 00:01:46.759 CC module/accel/ioat/accel_ioat.o 00:01:46.759 CC module/keyring/file/keyring_rpc.o 00:01:46.759 CC module/keyring/file/keyring.o 00:01:46.759 CC module/accel/iaa/accel_iaa.o 00:01:46.759 LIB libspdk_env_dpdk_rpc.a 00:01:46.759 CC module/accel/ioat/accel_ioat_rpc.o 00:01:46.759 CC module/accel/iaa/accel_iaa_rpc.o 00:01:46.759 CC module/scheduler/gscheduler/gscheduler.o 00:01:46.759 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:46.759 CC module/blob/bdev/blob_bdev.o 00:01:46.759 CC module/keyring/linux/keyring.o 00:01:46.759 CC module/keyring/linux/keyring_rpc.o 00:01:46.759 CC 
module/scheduler/dpdk_governor/dpdk_governor.o 00:01:46.759 CC module/fsdev/aio/fsdev_aio.o 00:01:46.759 CC module/fsdev/aio/fsdev_aio_rpc.o 00:01:46.759 CC module/fsdev/aio/linux_aio_mgr.o 00:01:46.759 SO libspdk_env_dpdk_rpc.so.6.0 00:01:46.759 SYMLINK libspdk_env_dpdk_rpc.so 00:01:46.759 LIB libspdk_scheduler_gscheduler.a 00:01:46.759 LIB libspdk_keyring_file.a 00:01:46.759 LIB libspdk_keyring_linux.a 00:01:46.759 SO libspdk_scheduler_gscheduler.so.4.0 00:01:46.759 LIB libspdk_accel_error.a 00:01:46.759 LIB libspdk_accel_ioat.a 00:01:46.759 LIB libspdk_scheduler_dynamic.a 00:01:46.759 LIB libspdk_scheduler_dpdk_governor.a 00:01:46.759 SO libspdk_accel_error.so.2.0 00:01:46.759 SO libspdk_keyring_linux.so.1.0 00:01:46.759 SO libspdk_keyring_file.so.2.0 00:01:46.759 SO libspdk_accel_ioat.so.6.0 00:01:46.759 LIB libspdk_accel_iaa.a 00:01:46.759 LIB libspdk_accel_dsa.a 00:01:46.759 SO libspdk_scheduler_dynamic.so.4.0 00:01:46.759 SYMLINK libspdk_scheduler_gscheduler.so 00:01:46.759 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:47.016 SYMLINK libspdk_accel_error.so 00:01:47.016 SO libspdk_accel_iaa.so.3.0 00:01:47.016 SO libspdk_accel_dsa.so.5.0 00:01:47.016 SYMLINK libspdk_keyring_linux.so 00:01:47.016 LIB libspdk_blob_bdev.a 00:01:47.016 SYMLINK libspdk_accel_ioat.so 00:01:47.016 SYMLINK libspdk_keyring_file.so 00:01:47.016 SYMLINK libspdk_scheduler_dynamic.so 00:01:47.016 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:47.016 SO libspdk_blob_bdev.so.12.0 00:01:47.016 SYMLINK libspdk_accel_dsa.so 00:01:47.016 SYMLINK libspdk_accel_iaa.so 00:01:47.016 SYMLINK libspdk_blob_bdev.so 00:01:47.016 LIB libspdk_vfu_device.a 00:01:47.016 SO libspdk_vfu_device.so.3.0 00:01:47.016 SYMLINK libspdk_vfu_device.so 00:01:47.272 LIB libspdk_fsdev_aio.a 00:01:47.272 LIB libspdk_sock_posix.a 00:01:47.272 SO libspdk_fsdev_aio.so.1.0 00:01:47.272 SO libspdk_sock_posix.so.6.0 00:01:47.272 SYMLINK libspdk_fsdev_aio.so 00:01:47.272 SYMLINK libspdk_sock_posix.so 00:01:47.530 CC module/bdev/error/vbdev_error.o 00:01:47.530 CC module/bdev/error/vbdev_error_rpc.o 00:01:47.530 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:47.530 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:47.530 CC module/bdev/raid/bdev_raid.o 00:01:47.530 CC module/bdev/raid/bdev_raid_sb.o 00:01:47.530 CC module/bdev/raid/bdev_raid_rpc.o 00:01:47.530 CC module/blobfs/bdev/blobfs_bdev.o 00:01:47.530 CC module/bdev/raid/raid0.o 00:01:47.530 CC module/bdev/raid/raid1.o 00:01:47.530 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:47.530 CC module/bdev/raid/concat.o 00:01:47.530 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:47.530 CC module/bdev/iscsi/bdev_iscsi.o 00:01:47.530 CC module/bdev/gpt/gpt.o 00:01:47.530 CC module/bdev/gpt/vbdev_gpt.o 00:01:47.530 CC module/bdev/delay/vbdev_delay.o 00:01:47.530 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:47.530 CC module/bdev/split/vbdev_split_rpc.o 00:01:47.530 CC module/bdev/split/vbdev_split.o 00:01:47.530 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:47.530 CC module/bdev/passthru/vbdev_passthru.o 00:01:47.530 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:47.530 CC module/bdev/null/bdev_null_rpc.o 00:01:47.530 CC module/bdev/null/bdev_null.o 00:01:47.530 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:47.530 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:47.530 CC module/bdev/aio/bdev_aio.o 00:01:47.530 CC module/bdev/aio/bdev_aio_rpc.o 00:01:47.530 CC module/bdev/malloc/bdev_malloc.o 00:01:47.530 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:47.530 CC module/bdev/ftl/bdev_ftl.o 
00:01:47.530 CC module/bdev/lvol/vbdev_lvol.o 00:01:47.530 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:47.530 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:47.530 CC module/bdev/nvme/bdev_nvme.o 00:01:47.530 CC module/bdev/nvme/nvme_rpc.o 00:01:47.530 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:47.530 CC module/bdev/nvme/vbdev_opal.o 00:01:47.530 CC module/bdev/nvme/bdev_mdns_client.o 00:01:47.530 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:47.530 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:47.789 LIB libspdk_blobfs_bdev.a 00:01:47.789 LIB libspdk_bdev_error.a 00:01:47.789 LIB libspdk_bdev_split.a 00:01:47.789 SO libspdk_blobfs_bdev.so.6.0 00:01:47.789 SO libspdk_bdev_error.so.6.0 00:01:47.789 LIB libspdk_bdev_gpt.a 00:01:47.789 SO libspdk_bdev_split.so.6.0 00:01:47.789 LIB libspdk_bdev_null.a 00:01:47.789 SO libspdk_bdev_gpt.so.6.0 00:01:47.789 SYMLINK libspdk_bdev_error.so 00:01:47.789 SYMLINK libspdk_blobfs_bdev.so 00:01:47.789 SO libspdk_bdev_null.so.6.0 00:01:47.789 SYMLINK libspdk_bdev_split.so 00:01:47.789 LIB libspdk_bdev_zone_block.a 00:01:47.789 LIB libspdk_bdev_passthru.a 00:01:47.789 SYMLINK libspdk_bdev_gpt.so 00:01:47.789 LIB libspdk_bdev_ftl.a 00:01:47.789 SO libspdk_bdev_zone_block.so.6.0 00:01:47.789 LIB libspdk_bdev_aio.a 00:01:47.789 SO libspdk_bdev_passthru.so.6.0 00:01:47.789 LIB libspdk_bdev_malloc.a 00:01:47.789 SYMLINK libspdk_bdev_null.so 00:01:47.789 LIB libspdk_bdev_iscsi.a 00:01:47.789 SO libspdk_bdev_ftl.so.6.0 00:01:47.789 LIB libspdk_bdev_delay.a 00:01:47.789 SO libspdk_bdev_aio.so.6.0 00:01:47.789 SO libspdk_bdev_malloc.so.6.0 00:01:47.789 SO libspdk_bdev_iscsi.so.6.0 00:01:47.789 SYMLINK libspdk_bdev_zone_block.so 00:01:47.789 SYMLINK libspdk_bdev_passthru.so 00:01:47.789 SO libspdk_bdev_delay.so.6.0 00:01:47.789 SYMLINK libspdk_bdev_aio.so 00:01:47.789 SYMLINK libspdk_bdev_ftl.so 00:01:48.048 SYMLINK libspdk_bdev_malloc.so 00:01:48.048 SYMLINK libspdk_bdev_iscsi.so 00:01:48.048 SYMLINK libspdk_bdev_delay.so 00:01:48.048 LIB libspdk_bdev_lvol.a 00:01:48.048 LIB libspdk_bdev_virtio.a 00:01:48.048 SO libspdk_bdev_lvol.so.6.0 00:01:48.048 SO libspdk_bdev_virtio.so.6.0 00:01:48.048 SYMLINK libspdk_bdev_lvol.so 00:01:48.048 SYMLINK libspdk_bdev_virtio.so 00:01:48.306 LIB libspdk_bdev_raid.a 00:01:48.306 SO libspdk_bdev_raid.so.6.0 00:01:48.306 SYMLINK libspdk_bdev_raid.so 00:01:49.299 LIB libspdk_bdev_nvme.a 00:01:49.299 SO libspdk_bdev_nvme.so.7.1 00:01:49.558 SYMLINK libspdk_bdev_nvme.so 00:01:50.127 CC module/event/subsystems/scheduler/scheduler.o 00:01:50.127 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:50.127 CC module/event/subsystems/vmd/vmd.o 00:01:50.127 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:50.127 CC module/event/subsystems/fsdev/fsdev.o 00:01:50.127 CC module/event/subsystems/keyring/keyring.o 00:01:50.127 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:50.127 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:50.127 CC module/event/subsystems/iobuf/iobuf.o 00:01:50.127 CC module/event/subsystems/sock/sock.o 00:01:50.127 LIB libspdk_event_vhost_blk.a 00:01:50.127 LIB libspdk_event_scheduler.a 00:01:50.127 LIB libspdk_event_keyring.a 00:01:50.127 LIB libspdk_event_vmd.a 00:01:50.127 LIB libspdk_event_sock.a 00:01:50.127 SO libspdk_event_vhost_blk.so.3.0 00:01:50.127 SO libspdk_event_scheduler.so.4.0 00:01:50.127 LIB libspdk_event_vfu_tgt.a 00:01:50.127 LIB libspdk_event_fsdev.a 00:01:50.127 LIB libspdk_event_iobuf.a 00:01:50.127 SO libspdk_event_vmd.so.6.0 00:01:50.127 SO libspdk_event_sock.so.5.0 00:01:50.127 SO 
libspdk_event_keyring.so.1.0 00:01:50.127 SO libspdk_event_vfu_tgt.so.3.0 00:01:50.387 SO libspdk_event_fsdev.so.1.0 00:01:50.387 SYMLINK libspdk_event_scheduler.so 00:01:50.387 SYMLINK libspdk_event_vhost_blk.so 00:01:50.387 SO libspdk_event_iobuf.so.3.0 00:01:50.387 SYMLINK libspdk_event_sock.so 00:01:50.387 SYMLINK libspdk_event_keyring.so 00:01:50.387 SYMLINK libspdk_event_vfu_tgt.so 00:01:50.387 SYMLINK libspdk_event_vmd.so 00:01:50.387 SYMLINK libspdk_event_fsdev.so 00:01:50.387 SYMLINK libspdk_event_iobuf.so 00:01:50.646 CC module/event/subsystems/accel/accel.o 00:01:50.908 LIB libspdk_event_accel.a 00:01:50.908 SO libspdk_event_accel.so.6.0 00:01:50.908 SYMLINK libspdk_event_accel.so 00:01:51.166 CC module/event/subsystems/bdev/bdev.o 00:01:51.424 LIB libspdk_event_bdev.a 00:01:51.424 SO libspdk_event_bdev.so.6.0 00:01:51.424 SYMLINK libspdk_event_bdev.so 00:01:51.682 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:51.682 CC module/event/subsystems/nbd/nbd.o 00:01:51.682 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:51.682 CC module/event/subsystems/scsi/scsi.o 00:01:51.682 CC module/event/subsystems/ublk/ublk.o 00:01:51.940 LIB libspdk_event_nbd.a 00:01:51.940 LIB libspdk_event_ublk.a 00:01:51.940 LIB libspdk_event_scsi.a 00:01:51.940 SO libspdk_event_nbd.so.6.0 00:01:51.940 LIB libspdk_event_nvmf.a 00:01:51.940 SO libspdk_event_ublk.so.3.0 00:01:51.940 SO libspdk_event_scsi.so.6.0 00:01:51.941 SO libspdk_event_nvmf.so.6.0 00:01:51.941 SYMLINK libspdk_event_nbd.so 00:01:51.941 SYMLINK libspdk_event_ublk.so 00:01:51.941 SYMLINK libspdk_event_scsi.so 00:01:51.941 SYMLINK libspdk_event_nvmf.so 00:01:52.199 CC module/event/subsystems/iscsi/iscsi.o 00:01:52.199 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:52.458 LIB libspdk_event_iscsi.a 00:01:52.458 LIB libspdk_event_vhost_scsi.a 00:01:52.458 SO libspdk_event_iscsi.so.6.0 00:01:52.458 SO libspdk_event_vhost_scsi.so.3.0 00:01:52.458 SYMLINK libspdk_event_iscsi.so 00:01:52.458 SYMLINK libspdk_event_vhost_scsi.so 00:01:52.717 SO libspdk.so.6.0 00:01:52.717 SYMLINK libspdk.so 00:01:52.975 CC app/spdk_lspci/spdk_lspci.o 00:01:52.975 CC test/rpc_client/rpc_client_test.o 00:01:52.975 CC app/spdk_nvme_perf/perf.o 00:01:52.975 TEST_HEADER include/spdk/accel.h 00:01:52.975 TEST_HEADER include/spdk/assert.h 00:01:52.975 TEST_HEADER include/spdk/accel_module.h 00:01:52.975 CXX app/trace/trace.o 00:01:52.975 TEST_HEADER include/spdk/barrier.h 00:01:52.975 CC app/spdk_top/spdk_top.o 00:01:52.975 CC app/spdk_nvme_discover/discovery_aer.o 00:01:52.975 CC app/trace_record/trace_record.o 00:01:52.975 TEST_HEADER include/spdk/bdev.h 00:01:52.975 TEST_HEADER include/spdk/base64.h 00:01:52.975 TEST_HEADER include/spdk/bdev_module.h 00:01:52.975 CC app/spdk_nvme_identify/identify.o 00:01:52.975 TEST_HEADER include/spdk/bdev_zone.h 00:01:52.975 TEST_HEADER include/spdk/bit_array.h 00:01:52.976 TEST_HEADER include/spdk/bit_pool.h 00:01:52.976 TEST_HEADER include/spdk/blob_bdev.h 00:01:52.976 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:52.976 TEST_HEADER include/spdk/blobfs.h 00:01:52.976 TEST_HEADER include/spdk/conf.h 00:01:52.976 TEST_HEADER include/spdk/blob.h 00:01:52.976 TEST_HEADER include/spdk/cpuset.h 00:01:52.976 TEST_HEADER include/spdk/config.h 00:01:52.976 TEST_HEADER include/spdk/crc32.h 00:01:52.976 TEST_HEADER include/spdk/crc16.h 00:01:52.976 TEST_HEADER include/spdk/crc64.h 00:01:52.976 TEST_HEADER include/spdk/dif.h 00:01:52.976 TEST_HEADER include/spdk/dma.h 00:01:52.976 TEST_HEADER include/spdk/endian.h 00:01:52.976 
TEST_HEADER include/spdk/env.h 00:01:52.976 TEST_HEADER include/spdk/env_dpdk.h 00:01:52.976 TEST_HEADER include/spdk/event.h 00:01:52.976 TEST_HEADER include/spdk/fd_group.h 00:01:52.976 TEST_HEADER include/spdk/fd.h 00:01:52.976 TEST_HEADER include/spdk/file.h 00:01:52.976 TEST_HEADER include/spdk/fsdev.h 00:01:52.976 TEST_HEADER include/spdk/fsdev_module.h 00:01:52.976 TEST_HEADER include/spdk/ftl.h 00:01:52.976 TEST_HEADER include/spdk/gpt_spec.h 00:01:52.976 TEST_HEADER include/spdk/fuse_dispatcher.h 00:01:52.976 TEST_HEADER include/spdk/hexlify.h 00:01:52.976 TEST_HEADER include/spdk/idxd.h 00:01:52.976 TEST_HEADER include/spdk/histogram_data.h 00:01:52.976 TEST_HEADER include/spdk/idxd_spec.h 00:01:52.976 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:52.976 TEST_HEADER include/spdk/ioat.h 00:01:52.976 TEST_HEADER include/spdk/ioat_spec.h 00:01:52.976 TEST_HEADER include/spdk/init.h 00:01:52.976 TEST_HEADER include/spdk/iscsi_spec.h 00:01:52.976 TEST_HEADER include/spdk/json.h 00:01:52.976 TEST_HEADER include/spdk/jsonrpc.h 00:01:52.976 TEST_HEADER include/spdk/keyring.h 00:01:52.976 TEST_HEADER include/spdk/likely.h 00:01:52.976 TEST_HEADER include/spdk/log.h 00:01:52.976 TEST_HEADER include/spdk/keyring_module.h 00:01:52.976 TEST_HEADER include/spdk/md5.h 00:01:52.976 TEST_HEADER include/spdk/lvol.h 00:01:52.976 TEST_HEADER include/spdk/memory.h 00:01:52.976 TEST_HEADER include/spdk/mmio.h 00:01:52.976 TEST_HEADER include/spdk/nbd.h 00:01:52.976 TEST_HEADER include/spdk/net.h 00:01:52.976 TEST_HEADER include/spdk/nvme.h 00:01:52.976 TEST_HEADER include/spdk/notify.h 00:01:52.976 TEST_HEADER include/spdk/nvme_intel.h 00:01:52.976 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:52.976 CC app/nvmf_tgt/nvmf_main.o 00:01:52.976 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:52.976 TEST_HEADER include/spdk/nvme_zns.h 00:01:52.976 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:52.976 TEST_HEADER include/spdk/nvme_spec.h 00:01:52.976 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:52.976 TEST_HEADER include/spdk/nvmf.h 00:01:52.976 TEST_HEADER include/spdk/nvmf_spec.h 00:01:52.976 TEST_HEADER include/spdk/opal.h 00:01:52.976 TEST_HEADER include/spdk/opal_spec.h 00:01:52.976 TEST_HEADER include/spdk/pci_ids.h 00:01:52.976 TEST_HEADER include/spdk/nvmf_transport.h 00:01:52.976 TEST_HEADER include/spdk/pipe.h 00:01:52.976 TEST_HEADER include/spdk/reduce.h 00:01:52.976 TEST_HEADER include/spdk/rpc.h 00:01:52.976 TEST_HEADER include/spdk/scheduler.h 00:01:52.976 TEST_HEADER include/spdk/queue.h 00:01:52.976 TEST_HEADER include/spdk/scsi.h 00:01:52.976 TEST_HEADER include/spdk/sock.h 00:01:52.976 TEST_HEADER include/spdk/stdinc.h 00:01:52.976 TEST_HEADER include/spdk/scsi_spec.h 00:01:52.976 TEST_HEADER include/spdk/string.h 00:01:52.976 TEST_HEADER include/spdk/trace.h 00:01:52.976 TEST_HEADER include/spdk/trace_parser.h 00:01:52.976 TEST_HEADER include/spdk/thread.h 00:01:52.976 TEST_HEADER include/spdk/tree.h 00:01:52.976 TEST_HEADER include/spdk/util.h 00:01:52.976 TEST_HEADER include/spdk/ublk.h 00:01:52.976 TEST_HEADER include/spdk/version.h 00:01:52.976 TEST_HEADER include/spdk/uuid.h 00:01:52.976 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:52.976 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:52.976 CC app/spdk_dd/spdk_dd.o 00:01:52.976 TEST_HEADER include/spdk/vmd.h 00:01:52.976 TEST_HEADER include/spdk/xor.h 00:01:52.976 TEST_HEADER include/spdk/zipf.h 00:01:52.976 TEST_HEADER include/spdk/vhost.h 00:01:53.248 CXX test/cpp_headers/accel.o 00:01:53.248 CXX test/cpp_headers/accel_module.o 
00:01:53.248 CXX test/cpp_headers/assert.o 00:01:53.248 CXX test/cpp_headers/base64.o 00:01:53.248 CXX test/cpp_headers/barrier.o 00:01:53.248 CXX test/cpp_headers/bdev.o 00:01:53.248 CXX test/cpp_headers/bdev_module.o 00:01:53.248 CC app/iscsi_tgt/iscsi_tgt.o 00:01:53.248 CXX test/cpp_headers/bit_array.o 00:01:53.248 CXX test/cpp_headers/bdev_zone.o 00:01:53.248 CXX test/cpp_headers/blobfs_bdev.o 00:01:53.248 CXX test/cpp_headers/bit_pool.o 00:01:53.248 CXX test/cpp_headers/blob_bdev.o 00:01:53.248 CXX test/cpp_headers/blobfs.o 00:01:53.248 CXX test/cpp_headers/blob.o 00:01:53.248 CXX test/cpp_headers/conf.o 00:01:53.248 CXX test/cpp_headers/cpuset.o 00:01:53.248 CXX test/cpp_headers/config.o 00:01:53.248 CC app/spdk_tgt/spdk_tgt.o 00:01:53.248 CXX test/cpp_headers/crc32.o 00:01:53.248 CXX test/cpp_headers/crc16.o 00:01:53.248 CXX test/cpp_headers/crc64.o 00:01:53.248 CXX test/cpp_headers/dif.o 00:01:53.248 CXX test/cpp_headers/dma.o 00:01:53.248 CXX test/cpp_headers/endian.o 00:01:53.248 CXX test/cpp_headers/fd_group.o 00:01:53.248 CXX test/cpp_headers/env.o 00:01:53.248 CXX test/cpp_headers/env_dpdk.o 00:01:53.248 CXX test/cpp_headers/event.o 00:01:53.248 CXX test/cpp_headers/file.o 00:01:53.248 CXX test/cpp_headers/fd.o 00:01:53.248 CXX test/cpp_headers/fsdev.o 00:01:53.248 CXX test/cpp_headers/fsdev_module.o 00:01:53.248 CXX test/cpp_headers/ftl.o 00:01:53.248 CXX test/cpp_headers/gpt_spec.o 00:01:53.248 CXX test/cpp_headers/fuse_dispatcher.o 00:01:53.248 CXX test/cpp_headers/hexlify.o 00:01:53.248 CXX test/cpp_headers/idxd.o 00:01:53.248 CXX test/cpp_headers/init.o 00:01:53.248 CXX test/cpp_headers/histogram_data.o 00:01:53.248 CXX test/cpp_headers/ioat_spec.o 00:01:53.248 CXX test/cpp_headers/idxd_spec.o 00:01:53.248 CXX test/cpp_headers/ioat.o 00:01:53.248 CXX test/cpp_headers/json.o 00:01:53.248 CC examples/util/zipf/zipf.o 00:01:53.248 CXX test/cpp_headers/iscsi_spec.o 00:01:53.248 CXX test/cpp_headers/keyring.o 00:01:53.248 CXX test/cpp_headers/jsonrpc.o 00:01:53.248 CXX test/cpp_headers/keyring_module.o 00:01:53.248 CXX test/cpp_headers/likely.o 00:01:53.248 CXX test/cpp_headers/log.o 00:01:53.248 CXX test/cpp_headers/md5.o 00:01:53.248 CXX test/cpp_headers/lvol.o 00:01:53.248 CXX test/cpp_headers/mmio.o 00:01:53.248 CXX test/cpp_headers/nbd.o 00:01:53.248 CXX test/cpp_headers/net.o 00:01:53.248 CXX test/cpp_headers/memory.o 00:01:53.248 CXX test/cpp_headers/notify.o 00:01:53.248 CXX test/cpp_headers/nvme_intel.o 00:01:53.248 CXX test/cpp_headers/nvme_ocssd.o 00:01:53.248 CXX test/cpp_headers/nvme.o 00:01:53.248 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:53.248 CXX test/cpp_headers/nvme_zns.o 00:01:53.248 CXX test/cpp_headers/nvmf_cmd.o 00:01:53.248 CXX test/cpp_headers/nvme_spec.o 00:01:53.248 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:53.248 CXX test/cpp_headers/nvmf.o 00:01:53.248 CXX test/cpp_headers/nvmf_spec.o 00:01:53.248 CXX test/cpp_headers/nvmf_transport.o 00:01:53.248 CC test/env/memory/memory_ut.o 00:01:53.248 CXX test/cpp_headers/opal.o 00:01:53.248 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:53.248 CC test/env/pci/pci_ut.o 00:01:53.248 CC test/env/vtophys/vtophys.o 00:01:53.248 CC test/app/stub/stub.o 00:01:53.248 CC test/app/jsoncat/jsoncat.o 00:01:53.248 LINK spdk_lspci 00:01:53.248 CC test/thread/poller_perf/poller_perf.o 00:01:53.248 CC examples/ioat/perf/perf.o 00:01:53.248 CC test/app/histogram_perf/histogram_perf.o 00:01:53.248 CC app/fio/nvme/fio_plugin.o 00:01:53.248 CC examples/ioat/verify/verify.o 00:01:53.248 CXX 
test/cpp_headers/opal_spec.o 00:01:53.248 CC test/dma/test_dma/test_dma.o 00:01:53.248 CC test/app/bdev_svc/bdev_svc.o 00:01:53.248 CC app/fio/bdev/fio_plugin.o 00:01:53.519 LINK interrupt_tgt 00:01:53.519 LINK nvmf_tgt 00:01:53.519 LINK spdk_trace_record 00:01:53.519 LINK rpc_client_test 00:01:53.519 CC test/env/mem_callbacks/mem_callbacks.o 00:01:53.519 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:53.779 LINK vtophys 00:01:53.779 LINK poller_perf 00:01:53.779 CXX test/cpp_headers/pci_ids.o 00:01:53.779 LINK jsoncat 00:01:53.779 CXX test/cpp_headers/pipe.o 00:01:53.779 CXX test/cpp_headers/queue.o 00:01:53.779 CXX test/cpp_headers/reduce.o 00:01:53.779 CXX test/cpp_headers/rpc.o 00:01:53.779 CXX test/cpp_headers/scheduler.o 00:01:53.779 CXX test/cpp_headers/scsi.o 00:01:53.779 LINK iscsi_tgt 00:01:53.779 LINK spdk_tgt 00:01:53.779 CXX test/cpp_headers/scsi_spec.o 00:01:53.779 CXX test/cpp_headers/sock.o 00:01:53.779 CXX test/cpp_headers/stdinc.o 00:01:53.779 CXX test/cpp_headers/thread.o 00:01:53.779 LINK spdk_nvme_discover 00:01:53.779 CXX test/cpp_headers/string.o 00:01:53.779 CXX test/cpp_headers/trace.o 00:01:53.779 CXX test/cpp_headers/tree.o 00:01:53.779 CXX test/cpp_headers/ublk.o 00:01:53.779 CXX test/cpp_headers/trace_parser.o 00:01:53.779 CXX test/cpp_headers/util.o 00:01:53.779 CXX test/cpp_headers/uuid.o 00:01:53.779 CXX test/cpp_headers/version.o 00:01:53.779 LINK zipf 00:01:53.779 CXX test/cpp_headers/vfio_user_pci.o 00:01:53.779 CXX test/cpp_headers/vhost.o 00:01:53.779 CXX test/cpp_headers/vmd.o 00:01:53.779 CXX test/cpp_headers/vfio_user_spec.o 00:01:53.779 CXX test/cpp_headers/xor.o 00:01:53.779 CXX test/cpp_headers/zipf.o 00:01:53.779 LINK histogram_perf 00:01:53.779 LINK bdev_svc 00:01:53.779 LINK verify 00:01:53.779 LINK env_dpdk_post_init 00:01:53.779 LINK stub 00:01:53.779 LINK spdk_dd 00:01:53.779 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:53.779 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:54.039 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:54.039 LINK spdk_trace 00:01:54.039 LINK ioat_perf 00:01:54.039 LINK pci_ut 00:01:54.297 LINK test_dma 00:01:54.297 CC test/event/event_perf/event_perf.o 00:01:54.297 CC test/event/reactor_perf/reactor_perf.o 00:01:54.297 LINK spdk_nvme 00:01:54.297 CC test/event/reactor/reactor.o 00:01:54.297 CC test/event/app_repeat/app_repeat.o 00:01:54.297 LINK nvme_fuzz 00:01:54.297 CC examples/vmd/lsvmd/lsvmd.o 00:01:54.297 CC app/vhost/vhost.o 00:01:54.297 CC test/event/scheduler/scheduler.o 00:01:54.297 LINK spdk_nvme_identify 00:01:54.297 CC examples/vmd/led/led.o 00:01:54.297 CC examples/idxd/perf/perf.o 00:01:54.297 CC examples/sock/hello_world/hello_sock.o 00:01:54.297 LINK spdk_bdev 00:01:54.297 LINK spdk_top 00:01:54.297 LINK spdk_nvme_perf 00:01:54.297 CC examples/thread/thread/thread_ex.o 00:01:54.297 LINK reactor_perf 00:01:54.297 LINK event_perf 00:01:54.297 LINK reactor 00:01:54.297 LINK vhost_fuzz 00:01:54.297 LINK app_repeat 00:01:54.297 LINK lsvmd 00:01:54.555 LINK led 00:01:54.555 LINK vhost 00:01:54.555 LINK mem_callbacks 00:01:54.555 LINK scheduler 00:01:54.555 LINK hello_sock 00:01:54.555 LINK memory_ut 00:01:54.555 LINK idxd_perf 00:01:54.555 LINK thread 00:01:54.555 CC test/nvme/err_injection/err_injection.o 00:01:54.555 CC test/nvme/simple_copy/simple_copy.o 00:01:54.555 CC test/nvme/e2edp/nvme_dp.o 00:01:54.555 CC test/nvme/connect_stress/connect_stress.o 00:01:54.555 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:54.555 CC test/nvme/reset/reset.o 00:01:54.555 CC test/nvme/startup/startup.o 
00:01:54.555 CC test/nvme/overhead/overhead.o 00:01:54.815 CC test/nvme/aer/aer.o 00:01:54.815 CC test/nvme/compliance/nvme_compliance.o 00:01:54.815 CC test/nvme/fused_ordering/fused_ordering.o 00:01:54.815 CC test/nvme/boot_partition/boot_partition.o 00:01:54.815 CC test/nvme/reserve/reserve.o 00:01:54.815 CC test/nvme/cuse/cuse.o 00:01:54.815 CC test/nvme/fdp/fdp.o 00:01:54.815 CC test/accel/dif/dif.o 00:01:54.815 CC test/nvme/sgl/sgl.o 00:01:54.815 CC test/blobfs/mkfs/mkfs.o 00:01:54.815 CC test/lvol/esnap/esnap.o 00:01:54.815 LINK err_injection 00:01:54.815 LINK connect_stress 00:01:54.815 LINK boot_partition 00:01:54.815 LINK doorbell_aers 00:01:54.815 LINK startup 00:01:54.815 LINK reserve 00:01:54.815 LINK fused_ordering 00:01:54.815 LINK simple_copy 00:01:54.815 LINK reset 00:01:54.815 LINK nvme_dp 00:01:55.074 LINK mkfs 00:01:55.074 LINK sgl 00:01:55.074 LINK overhead 00:01:55.074 LINK aer 00:01:55.074 CC examples/nvme/hello_world/hello_world.o 00:01:55.074 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:55.074 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:55.074 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:55.074 CC examples/nvme/hotplug/hotplug.o 00:01:55.074 CC examples/nvme/abort/abort.o 00:01:55.074 CC examples/nvme/reconnect/reconnect.o 00:01:55.074 LINK nvme_compliance 00:01:55.074 CC examples/nvme/arbitration/arbitration.o 00:01:55.074 LINK fdp 00:01:55.074 CC examples/accel/perf/accel_perf.o 00:01:55.074 CC examples/blob/hello_world/hello_blob.o 00:01:55.074 CC examples/blob/cli/blobcli.o 00:01:55.074 CC examples/fsdev/hello_world/hello_fsdev.o 00:01:55.074 LINK cmb_copy 00:01:55.074 LINK hello_world 00:01:55.074 LINK pmr_persistence 00:01:55.332 LINK hotplug 00:01:55.332 LINK arbitration 00:01:55.332 LINK dif 00:01:55.332 LINK reconnect 00:01:55.332 LINK abort 00:01:55.332 LINK iscsi_fuzz 00:01:55.332 LINK hello_blob 00:01:55.332 LINK nvme_manage 00:01:55.332 LINK hello_fsdev 00:01:55.590 LINK accel_perf 00:01:55.590 LINK blobcli 00:01:55.849 LINK cuse 00:01:55.849 CC test/bdev/bdevio/bdevio.o 00:01:55.849 CC examples/bdev/hello_world/hello_bdev.o 00:01:55.849 CC examples/bdev/bdevperf/bdevperf.o 00:01:56.108 LINK bdevio 00:01:56.108 LINK hello_bdev 00:01:56.676 LINK bdevperf 00:01:57.242 CC examples/nvmf/nvmf/nvmf.o 00:01:57.242 LINK nvmf 00:01:58.621 LINK esnap 00:01:58.621 00:01:58.621 real 0m54.882s 00:01:58.621 user 7m58.508s 00:01:58.621 sys 3m29.056s 00:01:58.621 04:56:35 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:58.621 04:56:35 make -- common/autotest_common.sh@10 -- $ set +x 00:01:58.621 ************************************ 00:01:58.621 END TEST make 00:01:58.621 ************************************ 00:01:58.621 04:56:35 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:58.621 04:56:35 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:58.621 04:56:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:58.621 04:56:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.621 04:56:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:58.621 04:56:35 -- pm/common@44 -- $ pid=3305517 00:01:58.621 04:56:35 -- pm/common@50 -- $ kill -TERM 3305517 00:01:58.621 04:56:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.621 04:56:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:58.621 04:56:35 -- pm/common@44 -- $ pid=3305518 
00:01:58.621 04:56:35 -- pm/common@50 -- $ kill -TERM 3305518 00:01:58.621 04:56:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.621 04:56:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:58.621 04:56:35 -- pm/common@44 -- $ pid=3305520 00:01:58.621 04:56:35 -- pm/common@50 -- $ kill -TERM 3305520 00:01:58.621 04:56:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.621 04:56:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:58.621 04:56:35 -- pm/common@44 -- $ pid=3305543 00:01:58.621 04:56:35 -- pm/common@50 -- $ sudo -E kill -TERM 3305543 00:01:58.879 04:56:35 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:01:58.879 04:56:35 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:58.879 04:56:35 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:01:58.879 04:56:35 -- common/autotest_common.sh@1693 -- # lcov --version 00:01:58.879 04:56:35 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:01:58.879 04:56:35 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:01:58.879 04:56:35 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:01:58.879 04:56:35 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:01:58.879 04:56:35 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:01:58.879 04:56:35 -- scripts/common.sh@336 -- # IFS=.-: 00:01:58.879 04:56:35 -- scripts/common.sh@336 -- # read -ra ver1 00:01:58.879 04:56:35 -- scripts/common.sh@337 -- # IFS=.-: 00:01:58.879 04:56:35 -- scripts/common.sh@337 -- # read -ra ver2 00:01:58.879 04:56:35 -- scripts/common.sh@338 -- # local 'op=<' 00:01:58.879 04:56:35 -- scripts/common.sh@340 -- # ver1_l=2 00:01:58.879 04:56:35 -- scripts/common.sh@341 -- # ver2_l=1 00:01:58.879 04:56:35 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:01:58.879 04:56:35 -- scripts/common.sh@344 -- # case "$op" in 00:01:58.879 04:56:35 -- scripts/common.sh@345 -- # : 1 00:01:58.879 04:56:35 -- scripts/common.sh@364 -- # (( v = 0 )) 00:01:58.879 04:56:35 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:58.879 04:56:35 -- scripts/common.sh@365 -- # decimal 1 00:01:58.879 04:56:35 -- scripts/common.sh@353 -- # local d=1 00:01:58.879 04:56:35 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:01:58.879 04:56:35 -- scripts/common.sh@355 -- # echo 1 00:01:58.879 04:56:35 -- scripts/common.sh@365 -- # ver1[v]=1 00:01:58.879 04:56:35 -- scripts/common.sh@366 -- # decimal 2 00:01:58.879 04:56:35 -- scripts/common.sh@353 -- # local d=2 00:01:58.879 04:56:35 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:01:58.879 04:56:35 -- scripts/common.sh@355 -- # echo 2 00:01:58.879 04:56:35 -- scripts/common.sh@366 -- # ver2[v]=2 00:01:58.879 04:56:35 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:01:58.879 04:56:35 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:01:58.879 04:56:35 -- scripts/common.sh@368 -- # return 0 00:01:58.879 04:56:35 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:01:58.879 04:56:35 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:01:58.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:01:58.879 --rc genhtml_branch_coverage=1 00:01:58.879 --rc genhtml_function_coverage=1 00:01:58.879 --rc genhtml_legend=1 00:01:58.879 --rc geninfo_all_blocks=1 00:01:58.879 --rc geninfo_unexecuted_blocks=1 00:01:58.879 00:01:58.879 ' 00:01:58.879 04:56:35 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:01:58.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:01:58.879 --rc genhtml_branch_coverage=1 00:01:58.879 --rc genhtml_function_coverage=1 00:01:58.879 --rc genhtml_legend=1 00:01:58.879 --rc geninfo_all_blocks=1 00:01:58.879 --rc geninfo_unexecuted_blocks=1 00:01:58.879 00:01:58.879 ' 00:01:58.879 04:56:35 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:01:58.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:01:58.879 --rc genhtml_branch_coverage=1 00:01:58.879 --rc genhtml_function_coverage=1 00:01:58.879 --rc genhtml_legend=1 00:01:58.879 --rc geninfo_all_blocks=1 00:01:58.879 --rc geninfo_unexecuted_blocks=1 00:01:58.879 00:01:58.879 ' 00:01:58.879 04:56:35 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:01:58.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:01:58.879 --rc genhtml_branch_coverage=1 00:01:58.879 --rc genhtml_function_coverage=1 00:01:58.879 --rc genhtml_legend=1 00:01:58.879 --rc geninfo_all_blocks=1 00:01:58.879 --rc geninfo_unexecuted_blocks=1 00:01:58.879 00:01:58.879 ' 00:01:58.879 04:56:35 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:58.879 04:56:35 -- nvmf/common.sh@7 -- # uname -s 00:01:58.879 04:56:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:58.879 04:56:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:58.879 04:56:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:58.879 04:56:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:58.879 04:56:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:58.879 04:56:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:58.879 04:56:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:58.879 04:56:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:58.879 04:56:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:58.879 04:56:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:58.879 04:56:35 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:01:58.879 04:56:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:01:58.879 04:56:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:58.879 04:56:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:58.879 04:56:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:58.879 04:56:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:58.879 04:56:35 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:58.879 04:56:35 -- scripts/common.sh@15 -- # shopt -s extglob 00:01:58.879 04:56:35 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:58.879 04:56:35 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:58.879 04:56:35 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:58.879 04:56:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.879 04:56:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.880 04:56:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.880 04:56:35 -- paths/export.sh@5 -- # export PATH 00:01:58.880 04:56:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.880 04:56:35 -- nvmf/common.sh@51 -- # : 0 00:01:58.880 04:56:35 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:01:58.880 04:56:35 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:01:58.880 04:56:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:58.880 04:56:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:58.880 04:56:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:58.880 04:56:35 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:01:58.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:01:58.880 04:56:35 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:01:58.880 04:56:35 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:01:58.880 04:56:35 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:01:58.880 04:56:35 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:58.880 04:56:35 -- spdk/autotest.sh@32 -- # uname -s 00:01:58.880 04:56:35 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:58.880 04:56:35 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:58.880 04:56:35 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:01:58.880 04:56:35 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:58.880 04:56:35 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:58.880 04:56:35 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:58.880 04:56:35 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:58.880 04:56:35 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:58.880 04:56:35 -- spdk/autotest.sh@48 -- # udevadm_pid=3367754 00:01:58.880 04:56:35 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:58.880 04:56:35 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:58.880 04:56:35 -- pm/common@17 -- # local monitor 00:01:58.880 04:56:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.880 04:56:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.880 04:56:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.880 04:56:35 -- pm/common@21 -- # date +%s 00:01:58.880 04:56:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.880 04:56:35 -- pm/common@21 -- # date +%s 00:01:58.880 04:56:35 -- pm/common@25 -- # sleep 1 00:01:58.880 04:56:35 -- pm/common@21 -- # date +%s 00:01:58.880 04:56:35 -- pm/common@21 -- # date +%s 00:01:58.880 04:56:35 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733716595 00:01:58.880 04:56:35 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733716595 00:01:58.880 04:56:35 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733716595 00:01:58.880 04:56:35 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733716595 00:01:59.138 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733716595_collect-cpu-load.pm.log 00:01:59.138 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733716595_collect-cpu-temp.pm.log 00:01:59.138 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733716595_collect-vmstat.pm.log 00:01:59.138 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733716595_collect-bmc-pm.bmc.pm.log 00:02:00.072 04:56:36 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:00.072 04:56:36 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:00.072 04:56:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:00.072 04:56:36 -- common/autotest_common.sh@10 -- # set +x 00:02:00.072 04:56:36 -- spdk/autotest.sh@59 -- # create_test_list 00:02:00.072 04:56:36 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:00.072 04:56:36 -- common/autotest_common.sh@10 -- # set +x 00:02:00.072 04:56:36 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:00.072 04:56:36 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:00.072 04:56:36 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:00.072 04:56:36 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:00.072 04:56:36 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:00.072 04:56:36 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:00.072 04:56:36 -- common/autotest_common.sh@1457 -- # uname 00:02:00.072 04:56:36 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:00.072 04:56:36 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:00.072 04:56:36 -- common/autotest_common.sh@1477 -- # uname 00:02:00.072 04:56:36 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:00.072 04:56:36 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:00.072 04:56:36 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:00.072 lcov: LCOV version 1.15 00:02:00.072 04:56:36 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:12.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:12.412 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:27.308 04:57:01 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:27.308 04:57:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:27.308 04:57:01 -- common/autotest_common.sh@10 -- # set +x 00:02:27.308 04:57:01 -- spdk/autotest.sh@78 -- # rm -f 00:02:27.308 04:57:01 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:27.566 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:27.566 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:27.566 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:27.566 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:27.566 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:27.566 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:27.825 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:27.825 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:27.825 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:27.825 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:27.825 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:27.825 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:27.825 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:27.825 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:27.825 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:27.825 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:27.825 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:28.084 04:57:04 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:02:28.084 04:57:04 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:02:28.084 04:57:04 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:02:28.084 04:57:04 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:02:28.084 04:57:04 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:02:28.084 04:57:04 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:02:28.084 04:57:04 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:02:28.084 04:57:04 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:28.084 04:57:04 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:02:28.084 04:57:04 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:28.084 04:57:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:28.084 04:57:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:28.084 04:57:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:28.084 04:57:04 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:28.084 04:57:04 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:28.084 No valid GPT data, bailing 00:02:28.084 04:57:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:28.084 04:57:04 -- scripts/common.sh@394 -- # pt= 00:02:28.084 04:57:04 -- scripts/common.sh@395 -- # return 1 00:02:28.084 04:57:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:28.084 1+0 records in 00:02:28.084 1+0 records out 00:02:28.084 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00192851 s, 544 MB/s 00:02:28.084 04:57:04 -- spdk/autotest.sh@105 -- # sync 00:02:28.084 04:57:04 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:28.084 04:57:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:28.084 04:57:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:33.356 04:57:09 -- spdk/autotest.sh@111 -- # uname -s 00:02:33.356 04:57:09 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:02:33.356 04:57:09 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:02:33.356 04:57:09 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:35.892 Hugepages 00:02:35.892 node hugesize free / total 00:02:35.892 node0 1048576kB 0 / 0 00:02:35.892 node0 2048kB 0 / 0 00:02:35.892 node1 1048576kB 0 / 0 00:02:35.892 node1 2048kB 0 / 0 00:02:35.892 00:02:35.892 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:36.151 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:02:36.151 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:02:36.151 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:02:36.151 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:02:36.151 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:02:36.151 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:02:36.151 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:02:36.151 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:02:36.151 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:02:36.151 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:02:36.151 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:02:36.151 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:02:36.151 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:02:36.151 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:02:36.151 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:02:36.151 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:02:36.151 I/OAT 0000:80:04.7 8086 
2021 1 ioatdma - - 00:02:36.151 04:57:12 -- spdk/autotest.sh@117 -- # uname -s 00:02:36.151 04:57:12 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:02:36.151 04:57:12 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:02:36.151 04:57:12 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:38.692 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:38.692 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:38.692 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:38.692 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:38.692 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:38.692 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:38.692 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:38.692 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:38.692 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:38.952 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:38.952 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:38.952 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:38.952 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:38.952 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:38.952 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:38.952 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:39.889 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:39.889 04:57:16 -- common/autotest_common.sh@1517 -- # sleep 1 00:02:40.826 04:57:17 -- common/autotest_common.sh@1518 -- # bdfs=() 00:02:40.826 04:57:17 -- common/autotest_common.sh@1518 -- # local bdfs 00:02:40.826 04:57:17 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:02:40.826 04:57:17 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:02:40.826 04:57:17 -- common/autotest_common.sh@1498 -- # bdfs=() 00:02:40.826 04:57:17 -- common/autotest_common.sh@1498 -- # local bdfs 00:02:40.826 04:57:17 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:02:40.826 04:57:17 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:02:40.826 04:57:17 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:02:40.826 04:57:17 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:02:40.826 04:57:17 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:02:40.826 04:57:17 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:43.361 Waiting for block devices as requested 00:02:43.361 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:02:43.620 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:02:43.620 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:02:43.620 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:02:43.879 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:02:43.879 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:02:43.879 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:02:43.879 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:02:44.139 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:02:44.139 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:02:44.139 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:02:44.399 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:02:44.399 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:02:44.399 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:02:44.658 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:02:44.658 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:02:44.658 0000:80:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:02:44.658 04:57:21 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:02:44.658 04:57:21 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:02:44.658 04:57:21 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:02:44.658 04:57:21 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:02:44.658 04:57:21 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:02:44.658 04:57:21 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:02:44.658 04:57:21 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:02:44.918 04:57:21 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:02:44.918 04:57:21 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:02:44.918 04:57:21 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:02:44.918 04:57:21 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:02:44.918 04:57:21 -- common/autotest_common.sh@1531 -- # grep oacs 00:02:44.918 04:57:21 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:02:44.918 04:57:21 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:02:44.918 04:57:21 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:02:44.918 04:57:21 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:02:44.918 04:57:21 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:02:44.918 04:57:21 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:02:44.918 04:57:21 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:02:44.918 04:57:21 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:02:44.918 04:57:21 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:02:44.918 04:57:21 -- common/autotest_common.sh@1543 -- # continue 00:02:44.918 04:57:21 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:02:44.918 04:57:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:02:44.918 04:57:21 -- common/autotest_common.sh@10 -- # set +x 00:02:44.918 04:57:21 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:02:44.918 04:57:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:44.918 04:57:21 -- common/autotest_common.sh@10 -- # set +x 00:02:44.918 04:57:21 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:47.458 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:47.458 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:47.458 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:47.458 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:47.458 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:47.458 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:47.458 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:47.458 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:47.458 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:47.458 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:47.717 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:47.717 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:47.717 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:47.717 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:47.717 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:47.717 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:48.652 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:48.652 04:57:25 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:02:48.652 04:57:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:02:48.652 04:57:25 -- common/autotest_common.sh@10 -- # set +x 00:02:48.652 04:57:25 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:02:48.652 04:57:25 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:02:48.652 04:57:25 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:02:48.652 04:57:25 -- common/autotest_common.sh@1563 -- # bdfs=() 00:02:48.652 04:57:25 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:02:48.652 04:57:25 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:02:48.652 04:57:25 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:02:48.652 04:57:25 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:02:48.652 04:57:25 -- common/autotest_common.sh@1498 -- # bdfs=() 00:02:48.652 04:57:25 -- common/autotest_common.sh@1498 -- # local bdfs 00:02:48.652 04:57:25 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:02:48.652 04:57:25 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:02:48.652 04:57:25 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:02:48.652 04:57:25 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:02:48.652 04:57:25 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:02:48.652 04:57:25 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:02:48.652 04:57:25 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:02:48.652 04:57:25 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:02:48.652 04:57:25 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:02:48.652 04:57:25 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:02:48.652 04:57:25 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:02:48.652 04:57:25 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:02:48.652 04:57:25 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:02:48.652 04:57:25 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:02:48.652 04:57:25 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=3382494 00:02:48.652 04:57:25 -- common/autotest_common.sh@1585 -- # waitforlisten 3382494 00:02:48.652 04:57:25 -- common/autotest_common.sh@835 -- # '[' -z 3382494 ']' 00:02:48.652 04:57:25 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:02:48.652 04:57:25 -- common/autotest_common.sh@840 -- # local max_retries=100 00:02:48.652 04:57:25 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:02:48.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:02:48.652 04:57:25 -- common/autotest_common.sh@844 -- # xtrace_disable 00:02:48.652 04:57:25 -- common/autotest_common.sh@10 -- # set +x 00:02:48.652 [2024-12-09 04:57:25.226198] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:02:48.652 [2024-12-09 04:57:25.226244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3382494 ] 00:02:48.652 [2024-12-09 04:57:25.291806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:02:48.910 [2024-12-09 04:57:25.335509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:02:48.910 04:57:25 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:02:48.910 04:57:25 -- common/autotest_common.sh@868 -- # return 0 00:02:48.910 04:57:25 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:02:48.910 04:57:25 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:02:48.910 04:57:25 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:02:52.195 nvme0n1 00:02:52.195 04:57:28 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:02:52.195 [2024-12-09 04:57:28.729064] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:02:52.195 request: 00:02:52.195 { 00:02:52.195 "nvme_ctrlr_name": "nvme0", 00:02:52.195 "password": "test", 00:02:52.195 "method": "bdev_nvme_opal_revert", 00:02:52.195 "req_id": 1 00:02:52.195 } 00:02:52.195 Got JSON-RPC error response 00:02:52.195 response: 00:02:52.195 { 00:02:52.195 "code": -32602, 00:02:52.195 "message": "Invalid parameters" 00:02:52.195 } 00:02:52.195 04:57:28 -- common/autotest_common.sh@1591 -- # true 00:02:52.195 04:57:28 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:02:52.195 04:57:28 -- common/autotest_common.sh@1595 -- # killprocess 3382494 00:02:52.195 04:57:28 -- common/autotest_common.sh@954 -- # '[' -z 3382494 ']' 00:02:52.195 04:57:28 -- common/autotest_common.sh@958 -- # kill -0 3382494 00:02:52.195 04:57:28 -- common/autotest_common.sh@959 -- # uname 00:02:52.195 04:57:28 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:02:52.195 04:57:28 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3382494 00:02:52.195 04:57:28 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:02:52.195 04:57:28 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:02:52.195 04:57:28 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3382494' 00:02:52.195 killing process with pid 3382494 00:02:52.195 04:57:28 -- common/autotest_common.sh@973 -- # kill 3382494 00:02:52.195 04:57:28 -- common/autotest_common.sh@978 -- # wait 3382494 00:02:54.100 04:57:30 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:02:54.100 04:57:30 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:02:54.100 04:57:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:02:54.100 04:57:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:02:54.100 04:57:30 -- spdk/autotest.sh@149 -- # timing_enter lib 00:02:54.100 04:57:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:54.100 04:57:30 -- common/autotest_common.sh@10 -- # set +x 00:02:54.100 04:57:30 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:02:54.100 04:57:30 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:02:54.100 04:57:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:54.100 04:57:30 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:02:54.100 04:57:30 -- common/autotest_common.sh@10 -- # set +x 00:02:54.100 ************************************ 00:02:54.100 START TEST env 00:02:54.100 ************************************ 00:02:54.101 04:57:30 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:02:54.101 * Looking for test storage... 00:02:54.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:02:54.101 04:57:30 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:54.101 04:57:30 env -- common/autotest_common.sh@1693 -- # lcov --version 00:02:54.101 04:57:30 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:54.101 04:57:30 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:54.101 04:57:30 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:54.101 04:57:30 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:54.101 04:57:30 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:54.101 04:57:30 env -- scripts/common.sh@336 -- # IFS=.-: 00:02:54.101 04:57:30 env -- scripts/common.sh@336 -- # read -ra ver1 00:02:54.101 04:57:30 env -- scripts/common.sh@337 -- # IFS=.-: 00:02:54.101 04:57:30 env -- scripts/common.sh@337 -- # read -ra ver2 00:02:54.101 04:57:30 env -- scripts/common.sh@338 -- # local 'op=<' 00:02:54.101 04:57:30 env -- scripts/common.sh@340 -- # ver1_l=2 00:02:54.101 04:57:30 env -- scripts/common.sh@341 -- # ver2_l=1 00:02:54.101 04:57:30 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:54.101 04:57:30 env -- scripts/common.sh@344 -- # case "$op" in 00:02:54.101 04:57:30 env -- scripts/common.sh@345 -- # : 1 00:02:54.101 04:57:30 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:54.101 04:57:30 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:54.101 04:57:30 env -- scripts/common.sh@365 -- # decimal 1 00:02:54.101 04:57:30 env -- scripts/common.sh@353 -- # local d=1 00:02:54.101 04:57:30 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:54.101 04:57:30 env -- scripts/common.sh@355 -- # echo 1 00:02:54.101 04:57:30 env -- scripts/common.sh@365 -- # ver1[v]=1 00:02:54.101 04:57:30 env -- scripts/common.sh@366 -- # decimal 2 00:02:54.101 04:57:30 env -- scripts/common.sh@353 -- # local d=2 00:02:54.101 04:57:30 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:54.101 04:57:30 env -- scripts/common.sh@355 -- # echo 2 00:02:54.101 04:57:30 env -- scripts/common.sh@366 -- # ver2[v]=2 00:02:54.101 04:57:30 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:54.101 04:57:30 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:54.101 04:57:30 env -- scripts/common.sh@368 -- # return 0 00:02:54.101 04:57:30 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:54.101 04:57:30 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:54.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.101 --rc genhtml_branch_coverage=1 00:02:54.101 --rc genhtml_function_coverage=1 00:02:54.101 --rc genhtml_legend=1 00:02:54.101 --rc geninfo_all_blocks=1 00:02:54.101 --rc geninfo_unexecuted_blocks=1 00:02:54.101 00:02:54.101 ' 00:02:54.101 04:57:30 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:54.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.101 --rc genhtml_branch_coverage=1 00:02:54.101 --rc genhtml_function_coverage=1 00:02:54.101 --rc genhtml_legend=1 00:02:54.101 --rc geninfo_all_blocks=1 00:02:54.101 --rc geninfo_unexecuted_blocks=1 00:02:54.101 00:02:54.101 ' 00:02:54.101 04:57:30 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:54.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.101 --rc genhtml_branch_coverage=1 00:02:54.101 --rc genhtml_function_coverage=1 00:02:54.101 --rc genhtml_legend=1 00:02:54.101 --rc geninfo_all_blocks=1 00:02:54.101 --rc geninfo_unexecuted_blocks=1 00:02:54.101 00:02:54.101 ' 00:02:54.101 04:57:30 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:54.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.101 --rc genhtml_branch_coverage=1 00:02:54.101 --rc genhtml_function_coverage=1 00:02:54.101 --rc genhtml_legend=1 00:02:54.101 --rc geninfo_all_blocks=1 00:02:54.101 --rc geninfo_unexecuted_blocks=1 00:02:54.101 00:02:54.101 ' 00:02:54.101 04:57:30 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:02:54.101 04:57:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:54.101 04:57:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:54.101 04:57:30 env -- common/autotest_common.sh@10 -- # set +x 00:02:54.101 ************************************ 00:02:54.101 START TEST env_memory 00:02:54.101 ************************************ 00:02:54.101 04:57:30 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:02:54.101 00:02:54.101 00:02:54.101 CUnit - A unit testing framework for C - Version 2.1-3 00:02:54.101 http://cunit.sourceforge.net/ 00:02:54.101 00:02:54.101 00:02:54.101 Suite: memory 00:02:54.101 Test: alloc and free memory map ...[2024-12-09 04:57:30.668726] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 284:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:02:54.101 passed 00:02:54.101 Test: mem map translation ...[2024-12-09 04:57:30.688154] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:02:54.101 [2024-12-09 04:57:30.688170] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:02:54.101 [2024-12-09 04:57:30.688208] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:02:54.101 [2024-12-09 04:57:30.688215] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 606:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:02:54.101 passed 00:02:54.101 Test: mem map registration ...[2024-12-09 04:57:30.726577] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:02:54.101 [2024-12-09 04:57:30.726595] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:02:54.101 passed 00:02:54.361 Test: mem map adjacent registrations ...passed 00:02:54.361 00:02:54.361 Run Summary: Type Total Ran Passed Failed Inactive 00:02:54.361 suites 1 1 n/a 0 0 00:02:54.361 tests 4 4 4 0 0 00:02:54.361 asserts 152 152 152 0 n/a 00:02:54.361 00:02:54.361 Elapsed time = 0.132 seconds 00:02:54.361 00:02:54.361 real 0m0.139s 00:02:54.361 user 0m0.132s 00:02:54.361 sys 0m0.006s 00:02:54.361 04:57:30 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:54.361 04:57:30 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:02:54.361 ************************************ 00:02:54.361 END TEST env_memory 00:02:54.361 ************************************ 00:02:54.361 04:57:30 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:02:54.361 04:57:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:54.361 04:57:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:54.361 04:57:30 env -- common/autotest_common.sh@10 -- # set +x 00:02:54.361 ************************************ 00:02:54.361 START TEST env_vtophys 00:02:54.361 ************************************ 00:02:54.361 04:57:30 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:02:54.361 EAL: lib.eal log level changed from notice to debug 00:02:54.361 EAL: Detected lcore 0 as core 0 on socket 0 00:02:54.361 EAL: Detected lcore 1 as core 1 on socket 0 00:02:54.361 EAL: Detected lcore 2 as core 2 on socket 0 00:02:54.361 EAL: Detected lcore 3 as core 3 on socket 0 00:02:54.361 EAL: Detected lcore 4 as core 4 on socket 0 00:02:54.361 EAL: Detected lcore 5 as core 5 on socket 0 00:02:54.361 EAL: Detected lcore 6 as core 6 on socket 0 00:02:54.361 EAL: Detected lcore 7 as core 8 on socket 0 00:02:54.361 EAL: Detected lcore 8 as core 9 on socket 0 00:02:54.361 EAL: Detected lcore 9 as core 10 on socket 0 00:02:54.361 EAL: Detected lcore 10 as 
core 11 on socket 0 00:02:54.361 EAL: Detected lcore 11 as core 12 on socket 0 00:02:54.361 EAL: Detected lcore 12 as core 13 on socket 0 00:02:54.361 EAL: Detected lcore 13 as core 16 on socket 0 00:02:54.361 EAL: Detected lcore 14 as core 17 on socket 0 00:02:54.361 EAL: Detected lcore 15 as core 18 on socket 0 00:02:54.361 EAL: Detected lcore 16 as core 19 on socket 0 00:02:54.361 EAL: Detected lcore 17 as core 20 on socket 0 00:02:54.361 EAL: Detected lcore 18 as core 21 on socket 0 00:02:54.361 EAL: Detected lcore 19 as core 25 on socket 0 00:02:54.361 EAL: Detected lcore 20 as core 26 on socket 0 00:02:54.361 EAL: Detected lcore 21 as core 27 on socket 0 00:02:54.361 EAL: Detected lcore 22 as core 28 on socket 0 00:02:54.361 EAL: Detected lcore 23 as core 29 on socket 0 00:02:54.361 EAL: Detected lcore 24 as core 0 on socket 1 00:02:54.361 EAL: Detected lcore 25 as core 1 on socket 1 00:02:54.361 EAL: Detected lcore 26 as core 2 on socket 1 00:02:54.361 EAL: Detected lcore 27 as core 3 on socket 1 00:02:54.361 EAL: Detected lcore 28 as core 4 on socket 1 00:02:54.361 EAL: Detected lcore 29 as core 5 on socket 1 00:02:54.361 EAL: Detected lcore 30 as core 6 on socket 1 00:02:54.361 EAL: Detected lcore 31 as core 9 on socket 1 00:02:54.361 EAL: Detected lcore 32 as core 10 on socket 1 00:02:54.361 EAL: Detected lcore 33 as core 11 on socket 1 00:02:54.361 EAL: Detected lcore 34 as core 12 on socket 1 00:02:54.361 EAL: Detected lcore 35 as core 13 on socket 1 00:02:54.361 EAL: Detected lcore 36 as core 16 on socket 1 00:02:54.361 EAL: Detected lcore 37 as core 17 on socket 1 00:02:54.361 EAL: Detected lcore 38 as core 18 on socket 1 00:02:54.361 EAL: Detected lcore 39 as core 19 on socket 1 00:02:54.361 EAL: Detected lcore 40 as core 20 on socket 1 00:02:54.361 EAL: Detected lcore 41 as core 21 on socket 1 00:02:54.361 EAL: Detected lcore 42 as core 24 on socket 1 00:02:54.361 EAL: Detected lcore 43 as core 25 on socket 1 00:02:54.361 EAL: Detected lcore 44 as core 26 on socket 1 00:02:54.361 EAL: Detected lcore 45 as core 27 on socket 1 00:02:54.361 EAL: Detected lcore 46 as core 28 on socket 1 00:02:54.361 EAL: Detected lcore 47 as core 29 on socket 1 00:02:54.361 EAL: Detected lcore 48 as core 0 on socket 0 00:02:54.361 EAL: Detected lcore 49 as core 1 on socket 0 00:02:54.361 EAL: Detected lcore 50 as core 2 on socket 0 00:02:54.361 EAL: Detected lcore 51 as core 3 on socket 0 00:02:54.361 EAL: Detected lcore 52 as core 4 on socket 0 00:02:54.361 EAL: Detected lcore 53 as core 5 on socket 0 00:02:54.361 EAL: Detected lcore 54 as core 6 on socket 0 00:02:54.361 EAL: Detected lcore 55 as core 8 on socket 0 00:02:54.361 EAL: Detected lcore 56 as core 9 on socket 0 00:02:54.361 EAL: Detected lcore 57 as core 10 on socket 0 00:02:54.361 EAL: Detected lcore 58 as core 11 on socket 0 00:02:54.361 EAL: Detected lcore 59 as core 12 on socket 0 00:02:54.361 EAL: Detected lcore 60 as core 13 on socket 0 00:02:54.361 EAL: Detected lcore 61 as core 16 on socket 0 00:02:54.361 EAL: Detected lcore 62 as core 17 on socket 0 00:02:54.361 EAL: Detected lcore 63 as core 18 on socket 0 00:02:54.361 EAL: Detected lcore 64 as core 19 on socket 0 00:02:54.361 EAL: Detected lcore 65 as core 20 on socket 0 00:02:54.361 EAL: Detected lcore 66 as core 21 on socket 0 00:02:54.361 EAL: Detected lcore 67 as core 25 on socket 0 00:02:54.361 EAL: Detected lcore 68 as core 26 on socket 0 00:02:54.361 EAL: Detected lcore 69 as core 27 on socket 0 00:02:54.361 EAL: Detected lcore 70 as core 28 on socket 0 
00:02:54.361 EAL: Detected lcore 71 as core 29 on socket 0 00:02:54.361 EAL: Detected lcore 72 as core 0 on socket 1 00:02:54.361 EAL: Detected lcore 73 as core 1 on socket 1 00:02:54.361 EAL: Detected lcore 74 as core 2 on socket 1 00:02:54.361 EAL: Detected lcore 75 as core 3 on socket 1 00:02:54.361 EAL: Detected lcore 76 as core 4 on socket 1 00:02:54.361 EAL: Detected lcore 77 as core 5 on socket 1 00:02:54.361 EAL: Detected lcore 78 as core 6 on socket 1 00:02:54.361 EAL: Detected lcore 79 as core 9 on socket 1 00:02:54.361 EAL: Detected lcore 80 as core 10 on socket 1 00:02:54.361 EAL: Detected lcore 81 as core 11 on socket 1 00:02:54.361 EAL: Detected lcore 82 as core 12 on socket 1 00:02:54.361 EAL: Detected lcore 83 as core 13 on socket 1 00:02:54.361 EAL: Detected lcore 84 as core 16 on socket 1 00:02:54.361 EAL: Detected lcore 85 as core 17 on socket 1 00:02:54.361 EAL: Detected lcore 86 as core 18 on socket 1 00:02:54.361 EAL: Detected lcore 87 as core 19 on socket 1 00:02:54.361 EAL: Detected lcore 88 as core 20 on socket 1 00:02:54.361 EAL: Detected lcore 89 as core 21 on socket 1 00:02:54.361 EAL: Detected lcore 90 as core 24 on socket 1 00:02:54.361 EAL: Detected lcore 91 as core 25 on socket 1 00:02:54.361 EAL: Detected lcore 92 as core 26 on socket 1 00:02:54.361 EAL: Detected lcore 93 as core 27 on socket 1 00:02:54.361 EAL: Detected lcore 94 as core 28 on socket 1 00:02:54.361 EAL: Detected lcore 95 as core 29 on socket 1 00:02:54.361 EAL: Maximum logical cores by configuration: 128 00:02:54.361 EAL: Detected CPU lcores: 96 00:02:54.361 EAL: Detected NUMA nodes: 2 00:02:54.361 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:02:54.361 EAL: Detected shared linkage of DPDK 00:02:54.361 EAL: No shared files mode enabled, IPC will be disabled 00:02:54.361 EAL: Bus pci wants IOVA as 'DC' 00:02:54.361 EAL: Buses did not request a specific IOVA mode. 00:02:54.361 EAL: IOMMU is available, selecting IOVA as VA mode. 00:02:54.361 EAL: Selected IOVA mode 'VA' 00:02:54.361 EAL: Probing VFIO support... 00:02:54.361 EAL: IOMMU type 1 (Type 1) is supported 00:02:54.361 EAL: IOMMU type 7 (sPAPR) is not supported 00:02:54.361 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:02:54.361 EAL: VFIO support initialized 00:02:54.361 EAL: Ask a virtual area of 0x2e000 bytes 00:02:54.361 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:02:54.361 EAL: Setting up physically contiguous memory... 
00:02:54.361 EAL: Setting maximum number of open files to 524288 00:02:54.361 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:02:54.361 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:02:54.361 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:02:54.361 EAL: Ask a virtual area of 0x61000 bytes 00:02:54.362 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:02:54.362 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:02:54.362 EAL: Ask a virtual area of 0x400000000 bytes 00:02:54.362 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:02:54.362 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:02:54.362 EAL: Ask a virtual area of 0x61000 bytes 00:02:54.362 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:02:54.362 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:02:54.362 EAL: Ask a virtual area of 0x400000000 bytes 00:02:54.362 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:02:54.362 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:02:54.362 EAL: Ask a virtual area of 0x61000 bytes 00:02:54.362 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:02:54.362 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:02:54.362 EAL: Ask a virtual area of 0x400000000 bytes 00:02:54.362 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:02:54.362 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:02:54.362 EAL: Ask a virtual area of 0x61000 bytes 00:02:54.362 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:02:54.362 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:02:54.362 EAL: Ask a virtual area of 0x400000000 bytes 00:02:54.362 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:02:54.362 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:02:54.362 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:02:54.362 EAL: Ask a virtual area of 0x61000 bytes 00:02:54.362 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:02:54.362 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:02:54.362 EAL: Ask a virtual area of 0x400000000 bytes 00:02:54.362 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:02:54.362 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:02:54.362 EAL: Ask a virtual area of 0x61000 bytes 00:02:54.362 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:02:54.362 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:02:54.362 EAL: Ask a virtual area of 0x400000000 bytes 00:02:54.362 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:02:54.362 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:02:54.362 EAL: Ask a virtual area of 0x61000 bytes 00:02:54.362 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:02:54.362 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:02:54.362 EAL: Ask a virtual area of 0x400000000 bytes 00:02:54.362 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:02:54.362 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:02:54.362 EAL: Ask a virtual area of 0x61000 bytes 00:02:54.362 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:02:54.362 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:02:54.362 EAL: Ask a virtual area of 0x400000000 bytes 00:02:54.362 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:02:54.362 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:02:54.362 EAL: Hugepages will be freed exactly as allocated. 00:02:54.362 EAL: No shared files mode enabled, IPC is disabled 00:02:54.362 EAL: No shared files mode enabled, IPC is disabled 00:02:54.362 EAL: TSC frequency is ~2300000 KHz 00:02:54.362 EAL: Main lcore 0 is ready (tid=7f2ba9adda00;cpuset=[0]) 00:02:54.362 EAL: Trying to obtain current memory policy. 00:02:54.362 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:54.362 EAL: Restoring previous memory policy: 0 00:02:54.362 EAL: request: mp_malloc_sync 00:02:54.362 EAL: No shared files mode enabled, IPC is disabled 00:02:54.362 EAL: Heap on socket 0 was expanded by 2MB 00:02:54.362 EAL: No shared files mode enabled, IPC is disabled 00:02:54.362 EAL: No PCI address specified using 'addr=' in: bus=pci 00:02:54.362 EAL: Mem event callback 'spdk:(nil)' registered 00:02:54.362 00:02:54.362 00:02:54.362 CUnit - A unit testing framework for C - Version 2.1-3 00:02:54.362 http://cunit.sourceforge.net/ 00:02:54.362 00:02:54.362 00:02:54.362 Suite: components_suite 00:02:54.362 Test: vtophys_malloc_test ...passed 00:02:54.362 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:02:54.362 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:54.362 EAL: Restoring previous memory policy: 4 00:02:54.362 EAL: Calling mem event callback 'spdk:(nil)' 00:02:54.362 EAL: request: mp_malloc_sync 00:02:54.362 EAL: No shared files mode enabled, IPC is disabled 00:02:54.362 EAL: Heap on socket 0 was expanded by 4MB 00:02:54.362 EAL: Calling mem event callback 'spdk:(nil)' 00:02:54.362 EAL: request: mp_malloc_sync 00:02:54.362 EAL: No shared files mode enabled, IPC is disabled 00:02:54.362 EAL: Heap on socket 0 was shrunk by 4MB 00:02:54.362 EAL: Trying to obtain current memory policy. 00:02:54.362 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:54.362 EAL: Restoring previous memory policy: 4 00:02:54.362 EAL: Calling mem event callback 'spdk:(nil)' 00:02:54.362 EAL: request: mp_malloc_sync 00:02:54.362 EAL: No shared files mode enabled, IPC is disabled 00:02:54.362 EAL: Heap on socket 0 was expanded by 6MB 00:02:54.362 EAL: Calling mem event callback 'spdk:(nil)' 00:02:54.362 EAL: request: mp_malloc_sync 00:02:54.362 EAL: No shared files mode enabled, IPC is disabled 00:02:54.362 EAL: Heap on socket 0 was shrunk by 6MB 00:02:54.362 EAL: Trying to obtain current memory policy. 00:02:54.362 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:54.362 EAL: Restoring previous memory policy: 4 00:02:54.362 EAL: Calling mem event callback 'spdk:(nil)' 00:02:54.362 EAL: request: mp_malloc_sync 00:02:54.362 EAL: No shared files mode enabled, IPC is disabled 00:02:54.362 EAL: Heap on socket 0 was expanded by 10MB 00:02:54.362 EAL: Calling mem event callback 'spdk:(nil)' 00:02:54.362 EAL: request: mp_malloc_sync 00:02:54.362 EAL: No shared files mode enabled, IPC is disabled 00:02:54.362 EAL: Heap on socket 0 was shrunk by 10MB 00:02:54.362 EAL: Trying to obtain current memory policy. 
00:02:54.362 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:54.362 EAL: Restoring previous memory policy: 4 00:02:54.362 EAL: Calling mem event callback 'spdk:(nil)' 00:02:54.362 EAL: request: mp_malloc_sync 00:02:54.362 EAL: No shared files mode enabled, IPC is disabled 00:02:54.362 EAL: Heap on socket 0 was expanded by 18MB 00:02:54.362 EAL: Calling mem event callback 'spdk:(nil)' 00:02:54.362 EAL: request: mp_malloc_sync 00:02:54.362 EAL: No shared files mode enabled, IPC is disabled 00:02:54.362 EAL: Heap on socket 0 was shrunk by 18MB 00:02:54.362 EAL: Trying to obtain current memory policy. 00:02:54.362 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:54.362 EAL: Restoring previous memory policy: 4 00:02:54.362 EAL: Calling mem event callback 'spdk:(nil)' 00:02:54.362 EAL: request: mp_malloc_sync 00:02:54.362 EAL: No shared files mode enabled, IPC is disabled 00:02:54.362 EAL: Heap on socket 0 was expanded by 34MB 00:02:54.362 EAL: Calling mem event callback 'spdk:(nil)' 00:02:54.362 EAL: request: mp_malloc_sync 00:02:54.362 EAL: No shared files mode enabled, IPC is disabled 00:02:54.362 EAL: Heap on socket 0 was shrunk by 34MB 00:02:54.362 EAL: Trying to obtain current memory policy. 00:02:54.362 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:54.362 EAL: Restoring previous memory policy: 4 00:02:54.362 EAL: Calling mem event callback 'spdk:(nil)' 00:02:54.362 EAL: request: mp_malloc_sync 00:02:54.362 EAL: No shared files mode enabled, IPC is disabled 00:02:54.362 EAL: Heap on socket 0 was expanded by 66MB 00:02:54.362 EAL: Calling mem event callback 'spdk:(nil)' 00:02:54.362 EAL: request: mp_malloc_sync 00:02:54.362 EAL: No shared files mode enabled, IPC is disabled 00:02:54.362 EAL: Heap on socket 0 was shrunk by 66MB 00:02:54.362 EAL: Trying to obtain current memory policy. 00:02:54.362 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:54.362 EAL: Restoring previous memory policy: 4 00:02:54.362 EAL: Calling mem event callback 'spdk:(nil)' 00:02:54.362 EAL: request: mp_malloc_sync 00:02:54.362 EAL: No shared files mode enabled, IPC is disabled 00:02:54.362 EAL: Heap on socket 0 was expanded by 130MB 00:02:54.620 EAL: Calling mem event callback 'spdk:(nil)' 00:02:54.620 EAL: request: mp_malloc_sync 00:02:54.620 EAL: No shared files mode enabled, IPC is disabled 00:02:54.620 EAL: Heap on socket 0 was shrunk by 130MB 00:02:54.620 EAL: Trying to obtain current memory policy. 00:02:54.620 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:54.620 EAL: Restoring previous memory policy: 4 00:02:54.620 EAL: Calling mem event callback 'spdk:(nil)' 00:02:54.620 EAL: request: mp_malloc_sync 00:02:54.620 EAL: No shared files mode enabled, IPC is disabled 00:02:54.620 EAL: Heap on socket 0 was expanded by 258MB 00:02:54.620 EAL: Calling mem event callback 'spdk:(nil)' 00:02:54.620 EAL: request: mp_malloc_sync 00:02:54.620 EAL: No shared files mode enabled, IPC is disabled 00:02:54.620 EAL: Heap on socket 0 was shrunk by 258MB 00:02:54.620 EAL: Trying to obtain current memory policy. 
00:02:54.620 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:54.620 EAL: Restoring previous memory policy: 4 00:02:54.620 EAL: Calling mem event callback 'spdk:(nil)' 00:02:54.620 EAL: request: mp_malloc_sync 00:02:54.620 EAL: No shared files mode enabled, IPC is disabled 00:02:54.620 EAL: Heap on socket 0 was expanded by 514MB 00:02:54.877 EAL: Calling mem event callback 'spdk:(nil)' 00:02:54.877 EAL: request: mp_malloc_sync 00:02:54.877 EAL: No shared files mode enabled, IPC is disabled 00:02:54.877 EAL: Heap on socket 0 was shrunk by 514MB 00:02:54.877 EAL: Trying to obtain current memory policy. 00:02:54.877 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:55.135 EAL: Restoring previous memory policy: 4 00:02:55.135 EAL: Calling mem event callback 'spdk:(nil)' 00:02:55.135 EAL: request: mp_malloc_sync 00:02:55.135 EAL: No shared files mode enabled, IPC is disabled 00:02:55.135 EAL: Heap on socket 0 was expanded by 1026MB 00:02:55.135 EAL: Calling mem event callback 'spdk:(nil)' 00:02:55.393 EAL: request: mp_malloc_sync 00:02:55.393 EAL: No shared files mode enabled, IPC is disabled 00:02:55.393 EAL: Heap on socket 0 was shrunk by 1026MB 00:02:55.393 passed 00:02:55.393 00:02:55.393 Run Summary: Type Total Ran Passed Failed Inactive 00:02:55.393 suites 1 1 n/a 0 0 00:02:55.393 tests 2 2 2 0 0 00:02:55.393 asserts 497 497 497 0 n/a 00:02:55.393 00:02:55.393 Elapsed time = 0.965 seconds 00:02:55.393 EAL: Calling mem event callback 'spdk:(nil)' 00:02:55.393 EAL: request: mp_malloc_sync 00:02:55.393 EAL: No shared files mode enabled, IPC is disabled 00:02:55.393 EAL: Heap on socket 0 was shrunk by 2MB 00:02:55.393 EAL: No shared files mode enabled, IPC is disabled 00:02:55.393 EAL: No shared files mode enabled, IPC is disabled 00:02:55.393 EAL: No shared files mode enabled, IPC is disabled 00:02:55.393 00:02:55.393 real 0m1.083s 00:02:55.393 user 0m0.640s 00:02:55.393 sys 0m0.417s 00:02:55.393 04:57:31 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:55.393 04:57:31 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:02:55.393 ************************************ 00:02:55.393 END TEST env_vtophys 00:02:55.393 ************************************ 00:02:55.393 04:57:31 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:02:55.393 04:57:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:55.393 04:57:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:55.393 04:57:31 env -- common/autotest_common.sh@10 -- # set +x 00:02:55.393 ************************************ 00:02:55.393 START TEST env_pci 00:02:55.393 ************************************ 00:02:55.393 04:57:31 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:02:55.393 00:02:55.393 00:02:55.393 CUnit - A unit testing framework for C - Version 2.1-3 00:02:55.393 http://cunit.sourceforge.net/ 00:02:55.393 00:02:55.393 00:02:55.393 Suite: pci 00:02:55.393 Test: pci_hook ...[2024-12-09 04:57:31.998651] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3383691 has claimed it 00:02:55.393 EAL: Cannot find device (10000:00:01.0) 00:02:55.393 EAL: Failed to attach device on primary process 00:02:55.393 passed 00:02:55.393 00:02:55.393 Run Summary: Type Total Ran Passed Failed Inactive 
00:02:55.393 suites 1 1 n/a 0 0 00:02:55.393 tests 1 1 1 0 0 00:02:55.393 asserts 25 25 25 0 n/a 00:02:55.393 00:02:55.393 Elapsed time = 0.027 seconds 00:02:55.393 00:02:55.393 real 0m0.047s 00:02:55.393 user 0m0.018s 00:02:55.393 sys 0m0.029s 00:02:55.393 04:57:32 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:55.393 04:57:32 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:02:55.393 ************************************ 00:02:55.393 END TEST env_pci 00:02:55.393 ************************************ 00:02:55.654 04:57:32 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:02:55.654 04:57:32 env -- env/env.sh@15 -- # uname 00:02:55.654 04:57:32 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:02:55.654 04:57:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:02:55.654 04:57:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:02:55.654 04:57:32 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:02:55.654 04:57:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:55.654 04:57:32 env -- common/autotest_common.sh@10 -- # set +x 00:02:55.654 ************************************ 00:02:55.654 START TEST env_dpdk_post_init 00:02:55.654 ************************************ 00:02:55.654 04:57:32 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:02:55.654 EAL: Detected CPU lcores: 96 00:02:55.654 EAL: Detected NUMA nodes: 2 00:02:55.654 EAL: Detected shared linkage of DPDK 00:02:55.654 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:02:55.654 EAL: Selected IOVA mode 'VA' 00:02:55.654 EAL: VFIO support initialized 00:02:55.654 TELEMETRY: No legacy callbacks, legacy socket not created 00:02:55.654 EAL: Using IOMMU type 1 (Type 1) 00:02:55.654 EAL: Ignore mapping IO port bar(1) 00:02:55.654 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:02:55.654 EAL: Ignore mapping IO port bar(1) 00:02:55.654 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:02:55.654 EAL: Ignore mapping IO port bar(1) 00:02:55.654 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:02:55.654 EAL: Ignore mapping IO port bar(1) 00:02:55.654 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:02:55.654 EAL: Ignore mapping IO port bar(1) 00:02:55.654 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:02:55.654 EAL: Ignore mapping IO port bar(1) 00:02:55.654 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:02:55.654 EAL: Ignore mapping IO port bar(1) 00:02:55.654 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:02:55.654 EAL: Ignore mapping IO port bar(1) 00:02:55.654 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:02:56.590 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:02:56.590 EAL: Ignore mapping IO port bar(1) 00:02:56.590 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:02:56.590 EAL: Ignore mapping IO port bar(1) 00:02:56.590 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:02:56.590 EAL: Ignore mapping IO port bar(1) 00:02:56.590 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:02:56.590 EAL: Ignore mapping IO port bar(1) 00:02:56.590 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:02:56.590 EAL: Ignore mapping IO port bar(1) 00:02:56.590 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:02:56.590 EAL: Ignore mapping IO port bar(1) 00:02:56.590 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:02:56.590 EAL: Ignore mapping IO port bar(1) 00:02:56.590 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:02:56.590 EAL: Ignore mapping IO port bar(1) 00:02:56.590 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:02:59.867 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:02:59.867 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:02:59.867 Starting DPDK initialization... 00:02:59.867 Starting SPDK post initialization... 00:02:59.867 SPDK NVMe probe 00:02:59.867 Attaching to 0000:5e:00.0 00:02:59.867 Attached to 0000:5e:00.0 00:02:59.867 Cleaning up... 00:02:59.867 00:02:59.867 real 0m4.381s 00:02:59.867 user 0m2.977s 00:02:59.867 sys 0m0.470s 00:02:59.867 04:57:36 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:59.867 04:57:36 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:02:59.867 ************************************ 00:02:59.867 END TEST env_dpdk_post_init 00:02:59.867 ************************************ 00:02:59.867 04:57:36 env -- env/env.sh@26 -- # uname 00:02:59.867 04:57:36 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:02:59.867 04:57:36 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:02:59.867 04:57:36 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:59.867 04:57:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:59.867 04:57:36 env -- common/autotest_common.sh@10 -- # set +x 00:03:00.125 ************************************ 00:03:00.125 START TEST env_mem_callbacks 00:03:00.125 ************************************ 00:03:00.125 04:57:36 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:00.125 EAL: Detected CPU lcores: 96 00:03:00.125 EAL: Detected NUMA nodes: 2 00:03:00.125 EAL: Detected shared linkage of DPDK 00:03:00.125 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:00.125 EAL: Selected IOVA mode 'VA' 00:03:00.125 EAL: VFIO support initialized 00:03:00.125 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:00.125 00:03:00.125 00:03:00.125 CUnit - A unit testing framework for C - Version 2.1-3 00:03:00.125 http://cunit.sourceforge.net/ 00:03:00.125 00:03:00.125 00:03:00.125 Suite: memory 00:03:00.125 Test: test ... 
00:03:00.125 register 0x200000200000 2097152 00:03:00.125 malloc 3145728 00:03:00.125 register 0x200000400000 4194304 00:03:00.125 buf 0x200000500000 len 3145728 PASSED 00:03:00.125 malloc 64 00:03:00.125 buf 0x2000004fff40 len 64 PASSED 00:03:00.125 malloc 4194304 00:03:00.125 register 0x200000800000 6291456 00:03:00.125 buf 0x200000a00000 len 4194304 PASSED 00:03:00.125 free 0x200000500000 3145728 00:03:00.125 free 0x2000004fff40 64 00:03:00.125 unregister 0x200000400000 4194304 PASSED 00:03:00.125 free 0x200000a00000 4194304 00:03:00.125 unregister 0x200000800000 6291456 PASSED 00:03:00.125 malloc 8388608 00:03:00.125 register 0x200000400000 10485760 00:03:00.125 buf 0x200000600000 len 8388608 PASSED 00:03:00.125 free 0x200000600000 8388608 00:03:00.125 unregister 0x200000400000 10485760 PASSED 00:03:00.125 passed 00:03:00.125 00:03:00.125 Run Summary: Type Total Ran Passed Failed Inactive 00:03:00.125 suites 1 1 n/a 0 0 00:03:00.125 tests 1 1 1 0 0 00:03:00.125 asserts 15 15 15 0 n/a 00:03:00.125 00:03:00.125 Elapsed time = 0.005 seconds 00:03:00.125 00:03:00.125 real 0m0.053s 00:03:00.125 user 0m0.018s 00:03:00.125 sys 0m0.035s 00:03:00.125 04:57:36 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:00.125 04:57:36 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:00.125 ************************************ 00:03:00.125 END TEST env_mem_callbacks 00:03:00.125 ************************************ 00:03:00.125 00:03:00.125 real 0m6.169s 00:03:00.125 user 0m4.000s 00:03:00.125 sys 0m1.240s 00:03:00.125 04:57:36 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:00.125 04:57:36 env -- common/autotest_common.sh@10 -- # set +x 00:03:00.125 ************************************ 00:03:00.125 END TEST env 00:03:00.125 ************************************ 00:03:00.125 04:57:36 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:00.125 04:57:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:00.125 04:57:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:00.125 04:57:36 -- common/autotest_common.sh@10 -- # set +x 00:03:00.125 ************************************ 00:03:00.125 START TEST rpc 00:03:00.125 ************************************ 00:03:00.125 04:57:36 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:00.125 * Looking for test storage... 
00:03:00.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:00.125 04:57:36 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:00.125 04:57:36 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:00.125 04:57:36 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:00.383 04:57:36 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:00.383 04:57:36 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:00.383 04:57:36 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:00.383 04:57:36 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:00.383 04:57:36 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:00.383 04:57:36 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:00.383 04:57:36 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:00.383 04:57:36 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:00.383 04:57:36 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:00.383 04:57:36 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:00.383 04:57:36 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:00.383 04:57:36 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:00.383 04:57:36 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:00.383 04:57:36 rpc -- scripts/common.sh@345 -- # : 1 00:03:00.383 04:57:36 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:00.383 04:57:36 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:00.383 04:57:36 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:00.383 04:57:36 rpc -- scripts/common.sh@353 -- # local d=1 00:03:00.383 04:57:36 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:00.383 04:57:36 rpc -- scripts/common.sh@355 -- # echo 1 00:03:00.383 04:57:36 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:00.383 04:57:36 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:00.383 04:57:36 rpc -- scripts/common.sh@353 -- # local d=2 00:03:00.383 04:57:36 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:00.383 04:57:36 rpc -- scripts/common.sh@355 -- # echo 2 00:03:00.383 04:57:36 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:00.383 04:57:36 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:00.383 04:57:36 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:00.383 04:57:36 rpc -- scripts/common.sh@368 -- # return 0 00:03:00.383 04:57:36 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:00.383 04:57:36 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:00.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.383 --rc genhtml_branch_coverage=1 00:03:00.383 --rc genhtml_function_coverage=1 00:03:00.383 --rc genhtml_legend=1 00:03:00.383 --rc geninfo_all_blocks=1 00:03:00.383 --rc geninfo_unexecuted_blocks=1 00:03:00.383 00:03:00.383 ' 00:03:00.383 04:57:36 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:00.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.383 --rc genhtml_branch_coverage=1 00:03:00.383 --rc genhtml_function_coverage=1 00:03:00.383 --rc genhtml_legend=1 00:03:00.383 --rc geninfo_all_blocks=1 00:03:00.383 --rc geninfo_unexecuted_blocks=1 00:03:00.383 00:03:00.383 ' 00:03:00.383 04:57:36 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:00.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.383 --rc genhtml_branch_coverage=1 00:03:00.383 --rc genhtml_function_coverage=1 
00:03:00.383 --rc genhtml_legend=1 00:03:00.383 --rc geninfo_all_blocks=1 00:03:00.383 --rc geninfo_unexecuted_blocks=1 00:03:00.383 00:03:00.383 ' 00:03:00.383 04:57:36 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:00.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.383 --rc genhtml_branch_coverage=1 00:03:00.383 --rc genhtml_function_coverage=1 00:03:00.383 --rc genhtml_legend=1 00:03:00.383 --rc geninfo_all_blocks=1 00:03:00.383 --rc geninfo_unexecuted_blocks=1 00:03:00.383 00:03:00.383 ' 00:03:00.383 04:57:36 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3384620 00:03:00.383 04:57:36 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:00.383 04:57:36 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:00.383 04:57:36 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3384620 00:03:00.383 04:57:36 rpc -- common/autotest_common.sh@835 -- # '[' -z 3384620 ']' 00:03:00.383 04:57:36 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:00.383 04:57:36 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:00.383 04:57:36 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:00.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:00.383 04:57:36 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:00.383 04:57:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:00.383 [2024-12-09 04:57:36.902880] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:03:00.383 [2024-12-09 04:57:36.902925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3384620 ] 00:03:00.383 [2024-12-09 04:57:36.967549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:00.383 [2024-12-09 04:57:37.007234] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:00.383 [2024-12-09 04:57:37.007275] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3384620' to capture a snapshot of events at runtime. 00:03:00.383 [2024-12-09 04:57:37.007285] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:00.383 [2024-12-09 04:57:37.007292] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:00.383 [2024-12-09 04:57:37.007297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3384620 for offline analysis/debug. 
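The spdk_tgt instance that the rpc suite drives below was started with the bdev tracepoint group enabled (-e bdev), which is why trace_get_info later reports a tpoint_mask of 0xffffffffffffffff for "bdev" and why the startup notice above suggests capturing a snapshot with spdk_trace. A minimal manual equivalent of the rpc_integrity flow, assuming the target is still listening on the default /var/tmp/spdk.sock and commands are issued from the SPDK repository root, would look like this sketch:

# create an 8 MB, 512-byte-block malloc bdev and layer a passthru bdev on top of it
./scripts/rpc.py bdev_malloc_create 8 512
./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
./scripts/rpc.py bdev_get_bdevs          # should list both Malloc0 and Passthru0
# tear down in reverse order
./scripts/rpc.py bdev_passthru_delete Passthru0
./scripts/rpc.py bdev_malloc_delete Malloc0
# optional: snapshot the trace buffer named in the notice above (PID taken from this run)
spdk_trace -s spdk_tgt -p 3384620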
00:03:00.383 [2024-12-09 04:57:37.007865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:00.641 04:57:37 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:00.641 04:57:37 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:00.641 04:57:37 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:00.641 04:57:37 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:00.641 04:57:37 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:00.641 04:57:37 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:00.641 04:57:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:00.641 04:57:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:00.641 04:57:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:00.641 ************************************ 00:03:00.641 START TEST rpc_integrity 00:03:00.641 ************************************ 00:03:00.641 04:57:37 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:00.641 04:57:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:00.641 04:57:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:00.641 04:57:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:00.641 04:57:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:00.641 04:57:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:00.641 04:57:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:00.898 04:57:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:00.898 04:57:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:00.898 04:57:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:00.898 04:57:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:00.898 04:57:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:00.898 04:57:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:00.898 04:57:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:00.898 04:57:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:00.898 04:57:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:00.898 04:57:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:00.898 04:57:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:00.898 { 00:03:00.898 "name": "Malloc0", 00:03:00.898 "aliases": [ 00:03:00.898 "82b4fba1-0da6-4899-8de9-ab14e23932d7" 00:03:00.898 ], 00:03:00.898 "product_name": "Malloc disk", 00:03:00.898 "block_size": 512, 00:03:00.898 "num_blocks": 16384, 00:03:00.898 "uuid": "82b4fba1-0da6-4899-8de9-ab14e23932d7", 00:03:00.898 "assigned_rate_limits": { 00:03:00.898 "rw_ios_per_sec": 0, 00:03:00.898 "rw_mbytes_per_sec": 0, 00:03:00.898 "r_mbytes_per_sec": 0, 00:03:00.898 "w_mbytes_per_sec": 0 00:03:00.898 }, 
00:03:00.898 "claimed": false, 00:03:00.898 "zoned": false, 00:03:00.898 "supported_io_types": { 00:03:00.898 "read": true, 00:03:00.898 "write": true, 00:03:00.898 "unmap": true, 00:03:00.898 "flush": true, 00:03:00.898 "reset": true, 00:03:00.898 "nvme_admin": false, 00:03:00.898 "nvme_io": false, 00:03:00.898 "nvme_io_md": false, 00:03:00.898 "write_zeroes": true, 00:03:00.898 "zcopy": true, 00:03:00.898 "get_zone_info": false, 00:03:00.898 "zone_management": false, 00:03:00.898 "zone_append": false, 00:03:00.898 "compare": false, 00:03:00.898 "compare_and_write": false, 00:03:00.898 "abort": true, 00:03:00.898 "seek_hole": false, 00:03:00.898 "seek_data": false, 00:03:00.898 "copy": true, 00:03:00.898 "nvme_iov_md": false 00:03:00.898 }, 00:03:00.898 "memory_domains": [ 00:03:00.898 { 00:03:00.898 "dma_device_id": "system", 00:03:00.898 "dma_device_type": 1 00:03:00.898 }, 00:03:00.898 { 00:03:00.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:00.898 "dma_device_type": 2 00:03:00.898 } 00:03:00.898 ], 00:03:00.898 "driver_specific": {} 00:03:00.898 } 00:03:00.898 ]' 00:03:00.898 04:57:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:00.898 04:57:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:00.898 04:57:37 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:00.898 04:57:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:00.898 04:57:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:00.898 [2024-12-09 04:57:37.388224] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:00.898 [2024-12-09 04:57:37.388255] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:00.898 [2024-12-09 04:57:37.388265] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a89280 00:03:00.898 [2024-12-09 04:57:37.388272] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:00.898 [2024-12-09 04:57:37.389383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:00.898 [2024-12-09 04:57:37.389406] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:00.898 Passthru0 00:03:00.898 04:57:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:00.898 04:57:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:00.898 04:57:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:00.898 04:57:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:00.898 04:57:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:00.898 04:57:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:00.898 { 00:03:00.898 "name": "Malloc0", 00:03:00.898 "aliases": [ 00:03:00.898 "82b4fba1-0da6-4899-8de9-ab14e23932d7" 00:03:00.898 ], 00:03:00.898 "product_name": "Malloc disk", 00:03:00.898 "block_size": 512, 00:03:00.898 "num_blocks": 16384, 00:03:00.898 "uuid": "82b4fba1-0da6-4899-8de9-ab14e23932d7", 00:03:00.898 "assigned_rate_limits": { 00:03:00.898 "rw_ios_per_sec": 0, 00:03:00.898 "rw_mbytes_per_sec": 0, 00:03:00.898 "r_mbytes_per_sec": 0, 00:03:00.898 "w_mbytes_per_sec": 0 00:03:00.898 }, 00:03:00.898 "claimed": true, 00:03:00.898 "claim_type": "exclusive_write", 00:03:00.898 "zoned": false, 00:03:00.898 "supported_io_types": { 00:03:00.898 "read": true, 00:03:00.898 "write": true, 00:03:00.898 "unmap": true, 00:03:00.898 "flush": 
true, 00:03:00.898 "reset": true, 00:03:00.898 "nvme_admin": false, 00:03:00.898 "nvme_io": false, 00:03:00.898 "nvme_io_md": false, 00:03:00.898 "write_zeroes": true, 00:03:00.898 "zcopy": true, 00:03:00.898 "get_zone_info": false, 00:03:00.898 "zone_management": false, 00:03:00.898 "zone_append": false, 00:03:00.898 "compare": false, 00:03:00.898 "compare_and_write": false, 00:03:00.898 "abort": true, 00:03:00.898 "seek_hole": false, 00:03:00.898 "seek_data": false, 00:03:00.898 "copy": true, 00:03:00.898 "nvme_iov_md": false 00:03:00.898 }, 00:03:00.898 "memory_domains": [ 00:03:00.898 { 00:03:00.898 "dma_device_id": "system", 00:03:00.898 "dma_device_type": 1 00:03:00.898 }, 00:03:00.898 { 00:03:00.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:00.898 "dma_device_type": 2 00:03:00.898 } 00:03:00.898 ], 00:03:00.898 "driver_specific": {} 00:03:00.898 }, 00:03:00.898 { 00:03:00.898 "name": "Passthru0", 00:03:00.898 "aliases": [ 00:03:00.898 "0f02777b-a4c8-5b2c-a499-7c3f1f78a847" 00:03:00.898 ], 00:03:00.898 "product_name": "passthru", 00:03:00.898 "block_size": 512, 00:03:00.898 "num_blocks": 16384, 00:03:00.898 "uuid": "0f02777b-a4c8-5b2c-a499-7c3f1f78a847", 00:03:00.898 "assigned_rate_limits": { 00:03:00.898 "rw_ios_per_sec": 0, 00:03:00.898 "rw_mbytes_per_sec": 0, 00:03:00.898 "r_mbytes_per_sec": 0, 00:03:00.898 "w_mbytes_per_sec": 0 00:03:00.898 }, 00:03:00.898 "claimed": false, 00:03:00.898 "zoned": false, 00:03:00.898 "supported_io_types": { 00:03:00.898 "read": true, 00:03:00.898 "write": true, 00:03:00.898 "unmap": true, 00:03:00.898 "flush": true, 00:03:00.898 "reset": true, 00:03:00.898 "nvme_admin": false, 00:03:00.898 "nvme_io": false, 00:03:00.898 "nvme_io_md": false, 00:03:00.898 "write_zeroes": true, 00:03:00.898 "zcopy": true, 00:03:00.898 "get_zone_info": false, 00:03:00.898 "zone_management": false, 00:03:00.898 "zone_append": false, 00:03:00.898 "compare": false, 00:03:00.898 "compare_and_write": false, 00:03:00.898 "abort": true, 00:03:00.898 "seek_hole": false, 00:03:00.898 "seek_data": false, 00:03:00.898 "copy": true, 00:03:00.898 "nvme_iov_md": false 00:03:00.898 }, 00:03:00.898 "memory_domains": [ 00:03:00.898 { 00:03:00.898 "dma_device_id": "system", 00:03:00.898 "dma_device_type": 1 00:03:00.898 }, 00:03:00.898 { 00:03:00.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:00.898 "dma_device_type": 2 00:03:00.898 } 00:03:00.898 ], 00:03:00.898 "driver_specific": { 00:03:00.898 "passthru": { 00:03:00.898 "name": "Passthru0", 00:03:00.898 "base_bdev_name": "Malloc0" 00:03:00.898 } 00:03:00.898 } 00:03:00.898 } 00:03:00.898 ]' 00:03:00.898 04:57:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:00.898 04:57:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:00.898 04:57:37 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:00.898 04:57:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:00.899 04:57:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:00.899 04:57:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:00.899 04:57:37 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:00.899 04:57:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:00.899 04:57:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:00.899 04:57:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:00.899 04:57:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:03:00.899 04:57:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:00.899 04:57:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:00.899 04:57:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:00.899 04:57:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:00.899 04:57:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:00.899 04:57:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:00.899 00:03:00.899 real 0m0.247s 00:03:00.899 user 0m0.157s 00:03:00.899 sys 0m0.035s 00:03:00.899 04:57:37 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:00.899 04:57:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:00.899 ************************************ 00:03:00.899 END TEST rpc_integrity 00:03:00.899 ************************************ 00:03:00.899 04:57:37 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:00.899 04:57:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:00.899 04:57:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:00.899 04:57:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:01.156 ************************************ 00:03:01.156 START TEST rpc_plugins 00:03:01.156 ************************************ 00:03:01.156 04:57:37 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:01.156 04:57:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:01.156 04:57:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.156 04:57:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:01.156 04:57:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.156 04:57:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:01.156 04:57:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:01.156 04:57:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.156 04:57:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:01.156 04:57:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.156 04:57:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:01.156 { 00:03:01.156 "name": "Malloc1", 00:03:01.156 "aliases": [ 00:03:01.156 "490c21be-7b37-49cf-8117-6f7c39208f0f" 00:03:01.156 ], 00:03:01.156 "product_name": "Malloc disk", 00:03:01.156 "block_size": 4096, 00:03:01.156 "num_blocks": 256, 00:03:01.156 "uuid": "490c21be-7b37-49cf-8117-6f7c39208f0f", 00:03:01.156 "assigned_rate_limits": { 00:03:01.156 "rw_ios_per_sec": 0, 00:03:01.156 "rw_mbytes_per_sec": 0, 00:03:01.156 "r_mbytes_per_sec": 0, 00:03:01.156 "w_mbytes_per_sec": 0 00:03:01.156 }, 00:03:01.156 "claimed": false, 00:03:01.156 "zoned": false, 00:03:01.156 "supported_io_types": { 00:03:01.156 "read": true, 00:03:01.156 "write": true, 00:03:01.156 "unmap": true, 00:03:01.156 "flush": true, 00:03:01.156 "reset": true, 00:03:01.156 "nvme_admin": false, 00:03:01.156 "nvme_io": false, 00:03:01.156 "nvme_io_md": false, 00:03:01.156 "write_zeroes": true, 00:03:01.156 "zcopy": true, 00:03:01.156 "get_zone_info": false, 00:03:01.156 "zone_management": false, 00:03:01.156 "zone_append": false, 00:03:01.156 "compare": false, 00:03:01.156 "compare_and_write": false, 00:03:01.156 "abort": true, 00:03:01.156 "seek_hole": false, 00:03:01.156 "seek_data": false, 00:03:01.156 "copy": true, 00:03:01.156 "nvme_iov_md": false 
00:03:01.156 }, 00:03:01.156 "memory_domains": [ 00:03:01.156 { 00:03:01.156 "dma_device_id": "system", 00:03:01.156 "dma_device_type": 1 00:03:01.156 }, 00:03:01.156 { 00:03:01.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:01.156 "dma_device_type": 2 00:03:01.156 } 00:03:01.156 ], 00:03:01.156 "driver_specific": {} 00:03:01.156 } 00:03:01.156 ]' 00:03:01.156 04:57:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:01.156 04:57:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:01.156 04:57:37 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:01.156 04:57:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.156 04:57:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:01.156 04:57:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.156 04:57:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:01.156 04:57:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.156 04:57:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:01.156 04:57:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.156 04:57:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:01.156 04:57:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:01.156 04:57:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:01.156 00:03:01.156 real 0m0.119s 00:03:01.156 user 0m0.071s 00:03:01.156 sys 0m0.020s 00:03:01.156 04:57:37 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:01.156 04:57:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:01.156 ************************************ 00:03:01.156 END TEST rpc_plugins 00:03:01.156 ************************************ 00:03:01.156 04:57:37 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:01.156 04:57:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:01.156 04:57:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:01.156 04:57:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:01.156 ************************************ 00:03:01.156 START TEST rpc_trace_cmd_test 00:03:01.156 ************************************ 00:03:01.156 04:57:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:01.156 04:57:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:01.156 04:57:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:01.156 04:57:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.156 04:57:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:01.156 04:57:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.156 04:57:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:01.156 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3384620", 00:03:01.156 "tpoint_group_mask": "0x8", 00:03:01.156 "iscsi_conn": { 00:03:01.156 "mask": "0x2", 00:03:01.156 "tpoint_mask": "0x0" 00:03:01.156 }, 00:03:01.156 "scsi": { 00:03:01.156 "mask": "0x4", 00:03:01.156 "tpoint_mask": "0x0" 00:03:01.156 }, 00:03:01.156 "bdev": { 00:03:01.156 "mask": "0x8", 00:03:01.156 "tpoint_mask": "0xffffffffffffffff" 00:03:01.156 }, 00:03:01.156 "nvmf_rdma": { 00:03:01.156 "mask": "0x10", 00:03:01.156 "tpoint_mask": "0x0" 00:03:01.156 }, 00:03:01.156 "nvmf_tcp": { 00:03:01.156 "mask": "0x20", 00:03:01.156 
"tpoint_mask": "0x0" 00:03:01.156 }, 00:03:01.156 "ftl": { 00:03:01.156 "mask": "0x40", 00:03:01.156 "tpoint_mask": "0x0" 00:03:01.156 }, 00:03:01.156 "blobfs": { 00:03:01.156 "mask": "0x80", 00:03:01.156 "tpoint_mask": "0x0" 00:03:01.156 }, 00:03:01.156 "dsa": { 00:03:01.156 "mask": "0x200", 00:03:01.156 "tpoint_mask": "0x0" 00:03:01.156 }, 00:03:01.156 "thread": { 00:03:01.156 "mask": "0x400", 00:03:01.156 "tpoint_mask": "0x0" 00:03:01.156 }, 00:03:01.156 "nvme_pcie": { 00:03:01.156 "mask": "0x800", 00:03:01.156 "tpoint_mask": "0x0" 00:03:01.156 }, 00:03:01.156 "iaa": { 00:03:01.156 "mask": "0x1000", 00:03:01.156 "tpoint_mask": "0x0" 00:03:01.156 }, 00:03:01.156 "nvme_tcp": { 00:03:01.156 "mask": "0x2000", 00:03:01.156 "tpoint_mask": "0x0" 00:03:01.156 }, 00:03:01.156 "bdev_nvme": { 00:03:01.156 "mask": "0x4000", 00:03:01.156 "tpoint_mask": "0x0" 00:03:01.156 }, 00:03:01.156 "sock": { 00:03:01.156 "mask": "0x8000", 00:03:01.156 "tpoint_mask": "0x0" 00:03:01.156 }, 00:03:01.156 "blob": { 00:03:01.156 "mask": "0x10000", 00:03:01.156 "tpoint_mask": "0x0" 00:03:01.156 }, 00:03:01.156 "bdev_raid": { 00:03:01.156 "mask": "0x20000", 00:03:01.156 "tpoint_mask": "0x0" 00:03:01.156 }, 00:03:01.156 "scheduler": { 00:03:01.156 "mask": "0x40000", 00:03:01.156 "tpoint_mask": "0x0" 00:03:01.156 } 00:03:01.156 }' 00:03:01.156 04:57:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:01.156 04:57:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:01.156 04:57:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:01.413 04:57:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:01.413 04:57:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:01.413 04:57:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:01.413 04:57:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:01.413 04:57:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:01.413 04:57:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:01.413 04:57:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:01.414 00:03:01.414 real 0m0.210s 00:03:01.414 user 0m0.182s 00:03:01.414 sys 0m0.020s 00:03:01.414 04:57:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:01.414 04:57:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:01.414 ************************************ 00:03:01.414 END TEST rpc_trace_cmd_test 00:03:01.414 ************************************ 00:03:01.414 04:57:37 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:01.414 04:57:37 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:01.414 04:57:37 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:01.414 04:57:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:01.414 04:57:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:01.414 04:57:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:01.414 ************************************ 00:03:01.414 START TEST rpc_daemon_integrity 00:03:01.414 ************************************ 00:03:01.414 04:57:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:01.414 04:57:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:01.414 04:57:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.414 04:57:38 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:01.414 04:57:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.414 04:57:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:01.414 04:57:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:01.671 04:57:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:01.671 04:57:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:01.671 04:57:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.671 04:57:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:01.671 04:57:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.671 04:57:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:01.671 04:57:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:01.671 04:57:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.671 04:57:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:01.671 04:57:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.671 04:57:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:01.671 { 00:03:01.671 "name": "Malloc2", 00:03:01.671 "aliases": [ 00:03:01.671 "c1f32fd6-c759-4d31-a44a-5ab2dda2bb65" 00:03:01.672 ], 00:03:01.672 "product_name": "Malloc disk", 00:03:01.672 "block_size": 512, 00:03:01.672 "num_blocks": 16384, 00:03:01.672 "uuid": "c1f32fd6-c759-4d31-a44a-5ab2dda2bb65", 00:03:01.672 "assigned_rate_limits": { 00:03:01.672 "rw_ios_per_sec": 0, 00:03:01.672 "rw_mbytes_per_sec": 0, 00:03:01.672 "r_mbytes_per_sec": 0, 00:03:01.672 "w_mbytes_per_sec": 0 00:03:01.672 }, 00:03:01.672 "claimed": false, 00:03:01.672 "zoned": false, 00:03:01.672 "supported_io_types": { 00:03:01.672 "read": true, 00:03:01.672 "write": true, 00:03:01.672 "unmap": true, 00:03:01.672 "flush": true, 00:03:01.672 "reset": true, 00:03:01.672 "nvme_admin": false, 00:03:01.672 "nvme_io": false, 00:03:01.672 "nvme_io_md": false, 00:03:01.672 "write_zeroes": true, 00:03:01.672 "zcopy": true, 00:03:01.672 "get_zone_info": false, 00:03:01.672 "zone_management": false, 00:03:01.672 "zone_append": false, 00:03:01.672 "compare": false, 00:03:01.672 "compare_and_write": false, 00:03:01.672 "abort": true, 00:03:01.672 "seek_hole": false, 00:03:01.672 "seek_data": false, 00:03:01.672 "copy": true, 00:03:01.672 "nvme_iov_md": false 00:03:01.672 }, 00:03:01.672 "memory_domains": [ 00:03:01.672 { 00:03:01.672 "dma_device_id": "system", 00:03:01.672 "dma_device_type": 1 00:03:01.672 }, 00:03:01.672 { 00:03:01.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:01.672 "dma_device_type": 2 00:03:01.672 } 00:03:01.672 ], 00:03:01.672 "driver_specific": {} 00:03:01.672 } 00:03:01.672 ]' 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:01.672 [2024-12-09 04:57:38.154315] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:01.672 
[2024-12-09 04:57:38.154344] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:01.672 [2024-12-09 04:57:38.154355] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a8b150 00:03:01.672 [2024-12-09 04:57:38.154363] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:01.672 [2024-12-09 04:57:38.155366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:01.672 [2024-12-09 04:57:38.155389] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:01.672 Passthru0 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:01.672 { 00:03:01.672 "name": "Malloc2", 00:03:01.672 "aliases": [ 00:03:01.672 "c1f32fd6-c759-4d31-a44a-5ab2dda2bb65" 00:03:01.672 ], 00:03:01.672 "product_name": "Malloc disk", 00:03:01.672 "block_size": 512, 00:03:01.672 "num_blocks": 16384, 00:03:01.672 "uuid": "c1f32fd6-c759-4d31-a44a-5ab2dda2bb65", 00:03:01.672 "assigned_rate_limits": { 00:03:01.672 "rw_ios_per_sec": 0, 00:03:01.672 "rw_mbytes_per_sec": 0, 00:03:01.672 "r_mbytes_per_sec": 0, 00:03:01.672 "w_mbytes_per_sec": 0 00:03:01.672 }, 00:03:01.672 "claimed": true, 00:03:01.672 "claim_type": "exclusive_write", 00:03:01.672 "zoned": false, 00:03:01.672 "supported_io_types": { 00:03:01.672 "read": true, 00:03:01.672 "write": true, 00:03:01.672 "unmap": true, 00:03:01.672 "flush": true, 00:03:01.672 "reset": true, 00:03:01.672 "nvme_admin": false, 00:03:01.672 "nvme_io": false, 00:03:01.672 "nvme_io_md": false, 00:03:01.672 "write_zeroes": true, 00:03:01.672 "zcopy": true, 00:03:01.672 "get_zone_info": false, 00:03:01.672 "zone_management": false, 00:03:01.672 "zone_append": false, 00:03:01.672 "compare": false, 00:03:01.672 "compare_and_write": false, 00:03:01.672 "abort": true, 00:03:01.672 "seek_hole": false, 00:03:01.672 "seek_data": false, 00:03:01.672 "copy": true, 00:03:01.672 "nvme_iov_md": false 00:03:01.672 }, 00:03:01.672 "memory_domains": [ 00:03:01.672 { 00:03:01.672 "dma_device_id": "system", 00:03:01.672 "dma_device_type": 1 00:03:01.672 }, 00:03:01.672 { 00:03:01.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:01.672 "dma_device_type": 2 00:03:01.672 } 00:03:01.672 ], 00:03:01.672 "driver_specific": {} 00:03:01.672 }, 00:03:01.672 { 00:03:01.672 "name": "Passthru0", 00:03:01.672 "aliases": [ 00:03:01.672 "79c5228c-424e-535f-9e82-88cad3870e4a" 00:03:01.672 ], 00:03:01.672 "product_name": "passthru", 00:03:01.672 "block_size": 512, 00:03:01.672 "num_blocks": 16384, 00:03:01.672 "uuid": "79c5228c-424e-535f-9e82-88cad3870e4a", 00:03:01.672 "assigned_rate_limits": { 00:03:01.672 "rw_ios_per_sec": 0, 00:03:01.672 "rw_mbytes_per_sec": 0, 00:03:01.672 "r_mbytes_per_sec": 0, 00:03:01.672 "w_mbytes_per_sec": 0 00:03:01.672 }, 00:03:01.672 "claimed": false, 00:03:01.672 "zoned": false, 00:03:01.672 "supported_io_types": { 00:03:01.672 "read": true, 00:03:01.672 "write": true, 00:03:01.672 "unmap": true, 00:03:01.672 "flush": true, 00:03:01.672 "reset": true, 
00:03:01.672 "nvme_admin": false, 00:03:01.672 "nvme_io": false, 00:03:01.672 "nvme_io_md": false, 00:03:01.672 "write_zeroes": true, 00:03:01.672 "zcopy": true, 00:03:01.672 "get_zone_info": false, 00:03:01.672 "zone_management": false, 00:03:01.672 "zone_append": false, 00:03:01.672 "compare": false, 00:03:01.672 "compare_and_write": false, 00:03:01.672 "abort": true, 00:03:01.672 "seek_hole": false, 00:03:01.672 "seek_data": false, 00:03:01.672 "copy": true, 00:03:01.672 "nvme_iov_md": false 00:03:01.672 }, 00:03:01.672 "memory_domains": [ 00:03:01.672 { 00:03:01.672 "dma_device_id": "system", 00:03:01.672 "dma_device_type": 1 00:03:01.672 }, 00:03:01.672 { 00:03:01.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:01.672 "dma_device_type": 2 00:03:01.672 } 00:03:01.672 ], 00:03:01.672 "driver_specific": { 00:03:01.672 "passthru": { 00:03:01.672 "name": "Passthru0", 00:03:01.672 "base_bdev_name": "Malloc2" 00:03:01.672 } 00:03:01.672 } 00:03:01.672 } 00:03:01.672 ]' 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:01.672 00:03:01.672 real 0m0.262s 00:03:01.672 user 0m0.167s 00:03:01.672 sys 0m0.045s 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:01.672 04:57:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:01.672 ************************************ 00:03:01.672 END TEST rpc_daemon_integrity 00:03:01.672 ************************************ 00:03:01.930 04:57:38 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:01.930 04:57:38 rpc -- rpc/rpc.sh@84 -- # killprocess 3384620 00:03:01.930 04:57:38 rpc -- common/autotest_common.sh@954 -- # '[' -z 3384620 ']' 00:03:01.930 04:57:38 rpc -- common/autotest_common.sh@958 -- # kill -0 3384620 00:03:01.930 04:57:38 rpc -- common/autotest_common.sh@959 -- # uname 00:03:01.930 04:57:38 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:01.930 04:57:38 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3384620 
00:03:01.930 04:57:38 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:01.930 04:57:38 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:01.930 04:57:38 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3384620' 00:03:01.930 killing process with pid 3384620 00:03:01.930 04:57:38 rpc -- common/autotest_common.sh@973 -- # kill 3384620 00:03:01.930 04:57:38 rpc -- common/autotest_common.sh@978 -- # wait 3384620 00:03:02.189 00:03:02.189 real 0m2.035s 00:03:02.189 user 0m2.561s 00:03:02.189 sys 0m0.685s 00:03:02.189 04:57:38 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:02.189 04:57:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:02.189 ************************************ 00:03:02.189 END TEST rpc 00:03:02.189 ************************************ 00:03:02.189 04:57:38 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:02.189 04:57:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:02.189 04:57:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:02.189 04:57:38 -- common/autotest_common.sh@10 -- # set +x 00:03:02.189 ************************************ 00:03:02.189 START TEST skip_rpc 00:03:02.189 ************************************ 00:03:02.189 04:57:38 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:02.447 * Looking for test storage... 00:03:02.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:02.447 04:57:38 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:02.447 04:57:38 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:02.447 04:57:38 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:02.447 04:57:38 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:02.447 04:57:38 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:02.447 04:57:38 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:02.447 04:57:38 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:02.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.447 --rc genhtml_branch_coverage=1 00:03:02.447 --rc genhtml_function_coverage=1 00:03:02.447 --rc genhtml_legend=1 00:03:02.447 --rc geninfo_all_blocks=1 00:03:02.447 --rc geninfo_unexecuted_blocks=1 00:03:02.447 00:03:02.447 ' 00:03:02.447 04:57:38 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:02.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.447 --rc genhtml_branch_coverage=1 00:03:02.447 --rc genhtml_function_coverage=1 00:03:02.447 --rc genhtml_legend=1 00:03:02.447 --rc geninfo_all_blocks=1 00:03:02.447 --rc geninfo_unexecuted_blocks=1 00:03:02.447 00:03:02.447 ' 00:03:02.447 04:57:38 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:02.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.447 --rc genhtml_branch_coverage=1 00:03:02.447 --rc genhtml_function_coverage=1 00:03:02.447 --rc genhtml_legend=1 00:03:02.447 --rc geninfo_all_blocks=1 00:03:02.447 --rc geninfo_unexecuted_blocks=1 00:03:02.447 00:03:02.447 ' 00:03:02.447 04:57:38 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:02.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.447 --rc genhtml_branch_coverage=1 00:03:02.447 --rc genhtml_function_coverage=1 00:03:02.447 --rc genhtml_legend=1 00:03:02.447 --rc geninfo_all_blocks=1 00:03:02.447 --rc geninfo_unexecuted_blocks=1 00:03:02.447 00:03:02.447 ' 00:03:02.447 04:57:38 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:02.447 04:57:38 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:02.447 04:57:38 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:02.447 04:57:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:02.447 04:57:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:02.447 04:57:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:02.447 ************************************ 00:03:02.447 START TEST skip_rpc 00:03:02.447 ************************************ 00:03:02.448 04:57:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:02.448 
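The skip_rpc case starting here tests the inverse of the rpc suite above: spdk_tgt is launched with --no-rpc-server, so the harness expects the spdk_get_version RPC to fail and asserts on that failure. A rough standalone sketch of the same check, reusing the binary path and 5-second settle time from this run but otherwise illustrative, might be:

# start the target without its RPC server, then expect an RPC call to fail (sketch only)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
spdk_pid=$!
sleep 5
if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version; then
    echo "unexpected: RPC server answered" >&2
else
    echo "RPC call failed as expected"
fi
kill $spdk_pid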
04:57:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3385253 00:03:02.448 04:57:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:02.448 04:57:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:02.448 04:57:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:02.448 [2024-12-09 04:57:39.046272] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:03:02.448 [2024-12-09 04:57:39.046317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3385253 ] 00:03:02.706 [2024-12-09 04:57:39.109946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:02.706 [2024-12-09 04:57:39.150078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:07.969 04:57:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:07.969 04:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:07.969 04:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:07.969 04:57:44 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:07.969 04:57:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:07.969 04:57:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:07.969 04:57:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:07.969 04:57:44 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:07.969 04:57:44 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:07.969 04:57:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:07.969 04:57:44 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:07.969 04:57:44 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:07.969 04:57:44 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:07.969 04:57:44 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:07.969 04:57:44 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:07.969 04:57:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:07.969 04:57:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3385253 00:03:07.969 04:57:44 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3385253 ']' 00:03:07.969 04:57:44 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3385253 00:03:07.969 04:57:44 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:07.969 04:57:44 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:07.969 04:57:44 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3385253 00:03:07.969 04:57:44 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:07.969 04:57:44 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:07.969 04:57:44 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3385253' 00:03:07.969 killing process with pid 3385253 00:03:07.969 04:57:44 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3385253 00:03:07.969 04:57:44 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3385253 00:03:07.969 00:03:07.969 real 0m5.404s 00:03:07.969 user 0m5.183s 00:03:07.969 sys 0m0.261s 00:03:07.969 04:57:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:07.969 04:57:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:07.969 ************************************ 00:03:07.969 END TEST skip_rpc 00:03:07.969 ************************************ 00:03:07.969 04:57:44 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:07.969 04:57:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:07.969 04:57:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:07.969 04:57:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:07.969 ************************************ 00:03:07.969 START TEST skip_rpc_with_json 00:03:07.969 ************************************ 00:03:07.969 04:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:07.969 04:57:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:07.969 04:57:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3386192 00:03:07.969 04:57:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:07.969 04:57:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:07.969 04:57:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3386192 00:03:07.969 04:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3386192 ']' 00:03:07.969 04:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:07.969 04:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:07.969 04:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:07.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:07.969 04:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:07.969 04:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:07.969 [2024-12-09 04:57:44.520213] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:03:07.969 [2024-12-09 04:57:44.520257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3386192 ] 00:03:07.969 [2024-12-09 04:57:44.583286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:08.228 [2024-12-09 04:57:44.626794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:08.228 04:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:08.228 04:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:08.228 04:57:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:08.228 04:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:08.228 04:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:08.228 [2024-12-09 04:57:44.849445] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:08.228 request: 00:03:08.228 { 00:03:08.228 "trtype": "tcp", 00:03:08.228 "method": "nvmf_get_transports", 00:03:08.228 "req_id": 1 00:03:08.228 } 00:03:08.228 Got JSON-RPC error response 00:03:08.228 response: 00:03:08.228 { 00:03:08.228 "code": -19, 00:03:08.228 "message": "No such device" 00:03:08.228 } 00:03:08.228 04:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:08.228 04:57:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:08.228 04:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:08.228 04:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:08.228 [2024-12-09 04:57:44.861554] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:08.228 04:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:08.228 04:57:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:08.228 04:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:08.228 04:57:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:08.486 04:57:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:08.486 04:57:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:08.486 { 00:03:08.486 "subsystems": [ 00:03:08.486 { 00:03:08.486 "subsystem": "fsdev", 00:03:08.486 "config": [ 00:03:08.486 { 00:03:08.486 "method": "fsdev_set_opts", 00:03:08.486 "params": { 00:03:08.486 "fsdev_io_pool_size": 65535, 00:03:08.486 "fsdev_io_cache_size": 256 00:03:08.486 } 00:03:08.486 } 00:03:08.486 ] 00:03:08.486 }, 00:03:08.486 { 00:03:08.486 "subsystem": "vfio_user_target", 00:03:08.486 "config": null 00:03:08.486 }, 00:03:08.486 { 00:03:08.487 "subsystem": "keyring", 00:03:08.487 "config": [] 00:03:08.487 }, 00:03:08.487 { 00:03:08.487 "subsystem": "iobuf", 00:03:08.487 "config": [ 00:03:08.487 { 00:03:08.487 "method": "iobuf_set_options", 00:03:08.487 "params": { 00:03:08.487 "small_pool_count": 8192, 00:03:08.487 "large_pool_count": 1024, 00:03:08.487 "small_bufsize": 8192, 00:03:08.487 "large_bufsize": 135168, 00:03:08.487 "enable_numa": false 00:03:08.487 } 00:03:08.487 } 
00:03:08.487 ] 00:03:08.487 }, 00:03:08.487 { 00:03:08.487 "subsystem": "sock", 00:03:08.487 "config": [ 00:03:08.487 { 00:03:08.487 "method": "sock_set_default_impl", 00:03:08.487 "params": { 00:03:08.487 "impl_name": "posix" 00:03:08.487 } 00:03:08.487 }, 00:03:08.487 { 00:03:08.487 "method": "sock_impl_set_options", 00:03:08.487 "params": { 00:03:08.487 "impl_name": "ssl", 00:03:08.487 "recv_buf_size": 4096, 00:03:08.487 "send_buf_size": 4096, 00:03:08.487 "enable_recv_pipe": true, 00:03:08.487 "enable_quickack": false, 00:03:08.487 "enable_placement_id": 0, 00:03:08.487 "enable_zerocopy_send_server": true, 00:03:08.487 "enable_zerocopy_send_client": false, 00:03:08.487 "zerocopy_threshold": 0, 00:03:08.487 "tls_version": 0, 00:03:08.487 "enable_ktls": false 00:03:08.487 } 00:03:08.487 }, 00:03:08.487 { 00:03:08.487 "method": "sock_impl_set_options", 00:03:08.487 "params": { 00:03:08.487 "impl_name": "posix", 00:03:08.487 "recv_buf_size": 2097152, 00:03:08.487 "send_buf_size": 2097152, 00:03:08.487 "enable_recv_pipe": true, 00:03:08.487 "enable_quickack": false, 00:03:08.487 "enable_placement_id": 0, 00:03:08.487 "enable_zerocopy_send_server": true, 00:03:08.487 "enable_zerocopy_send_client": false, 00:03:08.487 "zerocopy_threshold": 0, 00:03:08.487 "tls_version": 0, 00:03:08.487 "enable_ktls": false 00:03:08.487 } 00:03:08.487 } 00:03:08.487 ] 00:03:08.487 }, 00:03:08.487 { 00:03:08.487 "subsystem": "vmd", 00:03:08.487 "config": [] 00:03:08.487 }, 00:03:08.487 { 00:03:08.487 "subsystem": "accel", 00:03:08.487 "config": [ 00:03:08.487 { 00:03:08.487 "method": "accel_set_options", 00:03:08.487 "params": { 00:03:08.487 "small_cache_size": 128, 00:03:08.487 "large_cache_size": 16, 00:03:08.487 "task_count": 2048, 00:03:08.487 "sequence_count": 2048, 00:03:08.487 "buf_count": 2048 00:03:08.487 } 00:03:08.487 } 00:03:08.487 ] 00:03:08.487 }, 00:03:08.487 { 00:03:08.487 "subsystem": "bdev", 00:03:08.487 "config": [ 00:03:08.487 { 00:03:08.487 "method": "bdev_set_options", 00:03:08.487 "params": { 00:03:08.487 "bdev_io_pool_size": 65535, 00:03:08.487 "bdev_io_cache_size": 256, 00:03:08.487 "bdev_auto_examine": true, 00:03:08.487 "iobuf_small_cache_size": 128, 00:03:08.487 "iobuf_large_cache_size": 16 00:03:08.487 } 00:03:08.487 }, 00:03:08.487 { 00:03:08.487 "method": "bdev_raid_set_options", 00:03:08.487 "params": { 00:03:08.487 "process_window_size_kb": 1024, 00:03:08.487 "process_max_bandwidth_mb_sec": 0 00:03:08.487 } 00:03:08.487 }, 00:03:08.487 { 00:03:08.487 "method": "bdev_iscsi_set_options", 00:03:08.487 "params": { 00:03:08.487 "timeout_sec": 30 00:03:08.487 } 00:03:08.487 }, 00:03:08.487 { 00:03:08.487 "method": "bdev_nvme_set_options", 00:03:08.487 "params": { 00:03:08.487 "action_on_timeout": "none", 00:03:08.487 "timeout_us": 0, 00:03:08.487 "timeout_admin_us": 0, 00:03:08.487 "keep_alive_timeout_ms": 10000, 00:03:08.487 "arbitration_burst": 0, 00:03:08.487 "low_priority_weight": 0, 00:03:08.487 "medium_priority_weight": 0, 00:03:08.487 "high_priority_weight": 0, 00:03:08.487 "nvme_adminq_poll_period_us": 10000, 00:03:08.487 "nvme_ioq_poll_period_us": 0, 00:03:08.487 "io_queue_requests": 0, 00:03:08.487 "delay_cmd_submit": true, 00:03:08.487 "transport_retry_count": 4, 00:03:08.487 "bdev_retry_count": 3, 00:03:08.487 "transport_ack_timeout": 0, 00:03:08.487 "ctrlr_loss_timeout_sec": 0, 00:03:08.487 "reconnect_delay_sec": 0, 00:03:08.487 "fast_io_fail_timeout_sec": 0, 00:03:08.487 "disable_auto_failback": false, 00:03:08.487 "generate_uuids": false, 00:03:08.487 "transport_tos": 
0, 00:03:08.487 "nvme_error_stat": false, 00:03:08.487 "rdma_srq_size": 0, 00:03:08.487 "io_path_stat": false, 00:03:08.487 "allow_accel_sequence": false, 00:03:08.487 "rdma_max_cq_size": 0, 00:03:08.487 "rdma_cm_event_timeout_ms": 0, 00:03:08.487 "dhchap_digests": [ 00:03:08.487 "sha256", 00:03:08.487 "sha384", 00:03:08.487 "sha512" 00:03:08.487 ], 00:03:08.487 "dhchap_dhgroups": [ 00:03:08.487 "null", 00:03:08.487 "ffdhe2048", 00:03:08.487 "ffdhe3072", 00:03:08.487 "ffdhe4096", 00:03:08.487 "ffdhe6144", 00:03:08.487 "ffdhe8192" 00:03:08.487 ] 00:03:08.487 } 00:03:08.487 }, 00:03:08.487 { 00:03:08.487 "method": "bdev_nvme_set_hotplug", 00:03:08.487 "params": { 00:03:08.487 "period_us": 100000, 00:03:08.487 "enable": false 00:03:08.487 } 00:03:08.487 }, 00:03:08.487 { 00:03:08.487 "method": "bdev_wait_for_examine" 00:03:08.487 } 00:03:08.487 ] 00:03:08.487 }, 00:03:08.487 { 00:03:08.487 "subsystem": "scsi", 00:03:08.487 "config": null 00:03:08.487 }, 00:03:08.487 { 00:03:08.487 "subsystem": "scheduler", 00:03:08.487 "config": [ 00:03:08.487 { 00:03:08.487 "method": "framework_set_scheduler", 00:03:08.487 "params": { 00:03:08.487 "name": "static" 00:03:08.487 } 00:03:08.487 } 00:03:08.487 ] 00:03:08.487 }, 00:03:08.487 { 00:03:08.487 "subsystem": "vhost_scsi", 00:03:08.487 "config": [] 00:03:08.487 }, 00:03:08.487 { 00:03:08.487 "subsystem": "vhost_blk", 00:03:08.487 "config": [] 00:03:08.487 }, 00:03:08.487 { 00:03:08.487 "subsystem": "ublk", 00:03:08.487 "config": [] 00:03:08.487 }, 00:03:08.487 { 00:03:08.487 "subsystem": "nbd", 00:03:08.487 "config": [] 00:03:08.487 }, 00:03:08.487 { 00:03:08.487 "subsystem": "nvmf", 00:03:08.487 "config": [ 00:03:08.487 { 00:03:08.487 "method": "nvmf_set_config", 00:03:08.487 "params": { 00:03:08.487 "discovery_filter": "match_any", 00:03:08.487 "admin_cmd_passthru": { 00:03:08.487 "identify_ctrlr": false 00:03:08.487 }, 00:03:08.487 "dhchap_digests": [ 00:03:08.487 "sha256", 00:03:08.487 "sha384", 00:03:08.487 "sha512" 00:03:08.487 ], 00:03:08.487 "dhchap_dhgroups": [ 00:03:08.487 "null", 00:03:08.487 "ffdhe2048", 00:03:08.487 "ffdhe3072", 00:03:08.487 "ffdhe4096", 00:03:08.487 "ffdhe6144", 00:03:08.487 "ffdhe8192" 00:03:08.487 ] 00:03:08.487 } 00:03:08.487 }, 00:03:08.487 { 00:03:08.487 "method": "nvmf_set_max_subsystems", 00:03:08.487 "params": { 00:03:08.487 "max_subsystems": 1024 00:03:08.487 } 00:03:08.487 }, 00:03:08.487 { 00:03:08.487 "method": "nvmf_set_crdt", 00:03:08.487 "params": { 00:03:08.487 "crdt1": 0, 00:03:08.487 "crdt2": 0, 00:03:08.487 "crdt3": 0 00:03:08.487 } 00:03:08.487 }, 00:03:08.487 { 00:03:08.487 "method": "nvmf_create_transport", 00:03:08.487 "params": { 00:03:08.487 "trtype": "TCP", 00:03:08.487 "max_queue_depth": 128, 00:03:08.487 "max_io_qpairs_per_ctrlr": 127, 00:03:08.487 "in_capsule_data_size": 4096, 00:03:08.487 "max_io_size": 131072, 00:03:08.487 "io_unit_size": 131072, 00:03:08.487 "max_aq_depth": 128, 00:03:08.487 "num_shared_buffers": 511, 00:03:08.487 "buf_cache_size": 4294967295, 00:03:08.487 "dif_insert_or_strip": false, 00:03:08.487 "zcopy": false, 00:03:08.487 "c2h_success": true, 00:03:08.487 "sock_priority": 0, 00:03:08.487 "abort_timeout_sec": 1, 00:03:08.487 "ack_timeout": 0, 00:03:08.487 "data_wr_pool_size": 0 00:03:08.487 } 00:03:08.487 } 00:03:08.487 ] 00:03:08.487 }, 00:03:08.487 { 00:03:08.487 "subsystem": "iscsi", 00:03:08.487 "config": [ 00:03:08.487 { 00:03:08.487 "method": "iscsi_set_options", 00:03:08.487 "params": { 00:03:08.487 "node_base": "iqn.2016-06.io.spdk", 00:03:08.487 "max_sessions": 
128, 00:03:08.487 "max_connections_per_session": 2, 00:03:08.487 "max_queue_depth": 64, 00:03:08.487 "default_time2wait": 2, 00:03:08.487 "default_time2retain": 20, 00:03:08.487 "first_burst_length": 8192, 00:03:08.487 "immediate_data": true, 00:03:08.487 "allow_duplicated_isid": false, 00:03:08.487 "error_recovery_level": 0, 00:03:08.487 "nop_timeout": 60, 00:03:08.487 "nop_in_interval": 30, 00:03:08.487 "disable_chap": false, 00:03:08.487 "require_chap": false, 00:03:08.487 "mutual_chap": false, 00:03:08.487 "chap_group": 0, 00:03:08.487 "max_large_datain_per_connection": 64, 00:03:08.487 "max_r2t_per_connection": 4, 00:03:08.487 "pdu_pool_size": 36864, 00:03:08.487 "immediate_data_pool_size": 16384, 00:03:08.487 "data_out_pool_size": 2048 00:03:08.487 } 00:03:08.487 } 00:03:08.487 ] 00:03:08.488 } 00:03:08.488 ] 00:03:08.488 } 00:03:08.488 04:57:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:08.488 04:57:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3386192 00:03:08.488 04:57:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3386192 ']' 00:03:08.488 04:57:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3386192 00:03:08.488 04:57:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:08.488 04:57:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:08.488 04:57:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3386192 00:03:08.488 04:57:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:08.488 04:57:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:08.488 04:57:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3386192' 00:03:08.488 killing process with pid 3386192 00:03:08.488 04:57:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3386192 00:03:08.488 04:57:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3386192 00:03:09.052 04:57:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3386222 00:03:09.052 04:57:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:09.052 04:57:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:14.362 04:57:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3386222 00:03:14.362 04:57:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3386222 ']' 00:03:14.362 04:57:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3386222 00:03:14.362 04:57:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:14.362 04:57:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:14.362 04:57:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3386222 00:03:14.362 04:57:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:14.362 04:57:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:14.362 04:57:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 3386222' 00:03:14.362 killing process with pid 3386222 00:03:14.362 04:57:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3386222 00:03:14.362 04:57:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3386222 00:03:14.362 04:57:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:14.362 04:57:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:14.362 00:03:14.362 real 0m6.358s 00:03:14.362 user 0m6.081s 00:03:14.362 sys 0m0.568s 00:03:14.362 04:57:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:14.362 04:57:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:14.362 ************************************ 00:03:14.362 END TEST skip_rpc_with_json 00:03:14.362 ************************************ 00:03:14.363 04:57:50 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:14.363 04:57:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:14.363 04:57:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:14.363 04:57:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:14.363 ************************************ 00:03:14.363 START TEST skip_rpc_with_delay 00:03:14.363 ************************************ 00:03:14.363 04:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:14.363 04:57:50 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:14.363 04:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:14.363 04:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:14.363 04:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:14.363 04:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:14.363 04:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:14.363 04:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:14.363 04:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:14.363 04:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:14.363 04:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:14.363 04:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:14.363 04:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:14.363 
[2024-12-09 04:57:50.946721] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:03:14.363 04:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:14.363 04:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:14.363 04:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:14.363 04:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:14.363 00:03:14.363 real 0m0.067s 00:03:14.363 user 0m0.043s 00:03:14.363 sys 0m0.023s 00:03:14.363 04:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:14.363 04:57:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:14.363 ************************************ 00:03:14.363 END TEST skip_rpc_with_delay 00:03:14.363 ************************************ 00:03:14.363 04:57:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:14.363 04:57:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:14.363 04:57:50 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:14.363 04:57:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:14.363 04:57:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:14.363 04:57:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:14.625 ************************************ 00:03:14.625 START TEST exit_on_failed_rpc_init 00:03:14.625 ************************************ 00:03:14.625 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:14.625 04:57:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3387264 00:03:14.625 04:57:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3387264 00:03:14.625 04:57:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:14.625 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3387264 ']' 00:03:14.625 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:14.625 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:14.625 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:14.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:14.626 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:14.626 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:14.626 [2024-12-09 04:57:51.081627] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:03:14.626 [2024-12-09 04:57:51.081670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3387264 ] 00:03:14.626 [2024-12-09 04:57:51.146399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:14.626 [2024-12-09 04:57:51.189083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:14.885 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:14.885 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:03:14.885 04:57:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:14.885 04:57:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:14.885 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:03:14.885 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:14.885 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:14.885 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:14.885 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:14.885 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:14.885 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:14.886 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:14.886 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:14.886 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:14.886 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:14.886 [2024-12-09 04:57:51.462103] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:03:14.886 [2024-12-09 04:57:51.462150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3387424 ] 00:03:14.886 [2024-12-09 04:57:51.526077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:15.144 [2024-12-09 04:57:51.567330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:15.145 [2024-12-09 04:57:51.567385] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:03:15.145 [2024-12-09 04:57:51.567395] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:15.145 [2024-12-09 04:57:51.567404] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:15.145 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:03:15.145 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:15.145 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:03:15.145 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:03:15.145 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:03:15.145 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:15.145 04:57:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:15.145 04:57:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3387264 00:03:15.145 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3387264 ']' 00:03:15.145 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3387264 00:03:15.145 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:03:15.145 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:15.145 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3387264 00:03:15.145 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:15.145 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:15.145 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3387264' 00:03:15.145 killing process with pid 3387264 00:03:15.145 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3387264 00:03:15.145 04:57:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3387264 00:03:15.404 00:03:15.404 real 0m1.002s 00:03:15.404 user 0m1.109s 00:03:15.404 sys 0m0.369s 00:03:15.404 04:57:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:15.404 04:57:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:15.404 ************************************ 00:03:15.404 END TEST exit_on_failed_rpc_init 00:03:15.404 ************************************ 00:03:15.663 04:57:52 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:15.663 00:03:15.663 real 0m13.286s 00:03:15.663 user 0m12.603s 00:03:15.663 sys 0m1.517s 00:03:15.663 04:57:52 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:15.663 04:57:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:15.663 ************************************ 00:03:15.663 END TEST skip_rpc 00:03:15.663 ************************************ 00:03:15.663 04:57:52 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:15.663 04:57:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:15.663 04:57:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:15.663 04:57:52 -- 
common/autotest_common.sh@10 -- # set +x 00:03:15.663 ************************************ 00:03:15.663 START TEST rpc_client 00:03:15.663 ************************************ 00:03:15.663 04:57:52 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:15.663 * Looking for test storage... 00:03:15.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:15.663 04:57:52 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:15.663 04:57:52 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:03:15.663 04:57:52 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:15.663 04:57:52 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:15.663 04:57:52 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:15.663 04:57:52 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:15.663 04:57:52 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:15.663 04:57:52 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:15.663 04:57:52 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:15.663 04:57:52 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:15.663 04:57:52 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:15.663 04:57:52 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:15.663 04:57:52 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:15.663 04:57:52 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:15.663 04:57:52 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:15.663 04:57:52 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:03:15.663 04:57:52 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:15.663 04:57:52 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:15.663 04:57:52 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:15.663 04:57:52 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:15.922 04:57:52 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:15.922 04:57:52 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:15.922 04:57:52 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:15.922 04:57:52 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:15.922 04:57:52 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:15.922 04:57:52 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:15.922 04:57:52 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:15.922 04:57:52 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:15.922 04:57:52 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:15.922 04:57:52 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:15.922 04:57:52 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:15.922 04:57:52 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:15.922 04:57:52 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:15.922 04:57:52 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:15.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.922 --rc genhtml_branch_coverage=1 00:03:15.922 --rc genhtml_function_coverage=1 00:03:15.922 --rc genhtml_legend=1 00:03:15.922 --rc geninfo_all_blocks=1 00:03:15.922 --rc geninfo_unexecuted_blocks=1 00:03:15.922 00:03:15.922 ' 00:03:15.922 04:57:52 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:15.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.922 --rc genhtml_branch_coverage=1 00:03:15.922 --rc genhtml_function_coverage=1 00:03:15.922 --rc genhtml_legend=1 00:03:15.922 --rc geninfo_all_blocks=1 00:03:15.922 --rc geninfo_unexecuted_blocks=1 00:03:15.922 00:03:15.922 ' 00:03:15.922 04:57:52 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:15.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.922 --rc genhtml_branch_coverage=1 00:03:15.922 --rc genhtml_function_coverage=1 00:03:15.922 --rc genhtml_legend=1 00:03:15.922 --rc geninfo_all_blocks=1 00:03:15.922 --rc geninfo_unexecuted_blocks=1 00:03:15.922 00:03:15.922 ' 00:03:15.922 04:57:52 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:15.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.922 --rc genhtml_branch_coverage=1 00:03:15.922 --rc genhtml_function_coverage=1 00:03:15.922 --rc genhtml_legend=1 00:03:15.922 --rc geninfo_all_blocks=1 00:03:15.922 --rc geninfo_unexecuted_blocks=1 00:03:15.922 00:03:15.922 ' 00:03:15.922 04:57:52 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:15.922 OK 00:03:15.922 04:57:52 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:15.922 00:03:15.922 real 0m0.200s 00:03:15.922 user 0m0.117s 00:03:15.922 sys 0m0.097s 00:03:15.922 04:57:52 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:15.922 04:57:52 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:15.922 ************************************ 00:03:15.922 END TEST rpc_client 00:03:15.922 ************************************ 00:03:15.922 04:57:52 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:03:15.922 04:57:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:15.922 04:57:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:15.922 04:57:52 -- common/autotest_common.sh@10 -- # set +x 00:03:15.922 ************************************ 00:03:15.922 START TEST json_config 00:03:15.922 ************************************ 00:03:15.922 04:57:52 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:15.922 04:57:52 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:15.922 04:57:52 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:03:15.922 04:57:52 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:15.922 04:57:52 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:15.923 04:57:52 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:15.923 04:57:52 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:15.923 04:57:52 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:15.923 04:57:52 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:15.923 04:57:52 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:15.923 04:57:52 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:15.923 04:57:52 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:15.923 04:57:52 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:15.923 04:57:52 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:15.923 04:57:52 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:15.923 04:57:52 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:15.923 04:57:52 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:15.923 04:57:52 json_config -- scripts/common.sh@345 -- # : 1 00:03:15.923 04:57:52 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:15.923 04:57:52 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:15.923 04:57:52 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:15.923 04:57:52 json_config -- scripts/common.sh@353 -- # local d=1 00:03:15.923 04:57:52 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:15.923 04:57:52 json_config -- scripts/common.sh@355 -- # echo 1 00:03:15.923 04:57:52 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:15.923 04:57:52 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:15.923 04:57:52 json_config -- scripts/common.sh@353 -- # local d=2 00:03:15.923 04:57:52 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:15.923 04:57:52 json_config -- scripts/common.sh@355 -- # echo 2 00:03:15.923 04:57:52 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:15.923 04:57:52 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:15.923 04:57:52 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:15.923 04:57:52 json_config -- scripts/common.sh@368 -- # return 0 00:03:15.923 04:57:52 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:15.923 04:57:52 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:15.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.923 --rc genhtml_branch_coverage=1 00:03:15.923 --rc genhtml_function_coverage=1 00:03:15.923 --rc genhtml_legend=1 00:03:15.923 --rc geninfo_all_blocks=1 00:03:15.923 --rc geninfo_unexecuted_blocks=1 00:03:15.923 00:03:15.923 ' 00:03:15.923 04:57:52 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:15.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.923 --rc genhtml_branch_coverage=1 00:03:15.923 --rc genhtml_function_coverage=1 00:03:15.923 --rc genhtml_legend=1 00:03:15.923 --rc geninfo_all_blocks=1 00:03:15.923 --rc geninfo_unexecuted_blocks=1 00:03:15.923 00:03:15.923 ' 00:03:15.923 04:57:52 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:15.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.923 --rc genhtml_branch_coverage=1 00:03:15.923 --rc genhtml_function_coverage=1 00:03:15.923 --rc genhtml_legend=1 00:03:15.923 --rc geninfo_all_blocks=1 00:03:15.923 --rc geninfo_unexecuted_blocks=1 00:03:15.923 00:03:15.923 ' 00:03:15.923 04:57:52 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:15.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.923 --rc genhtml_branch_coverage=1 00:03:15.923 --rc genhtml_function_coverage=1 00:03:15.923 --rc genhtml_legend=1 00:03:15.923 --rc geninfo_all_blocks=1 00:03:15.923 --rc geninfo_unexecuted_blocks=1 00:03:15.923 00:03:15.923 ' 00:03:15.923 04:57:52 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:15.923 04:57:52 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:15.923 04:57:52 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:15.923 04:57:52 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:15.923 04:57:52 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:15.923 04:57:52 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:15.923 04:57:52 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:15.923 04:57:52 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:15.923 04:57:52 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:03:15.923 04:57:52 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:15.923 04:57:52 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:16.183 04:57:52 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:16.183 04:57:52 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:16.183 04:57:52 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:16.183 04:57:52 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:16.183 04:57:52 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:16.183 04:57:52 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:16.183 04:57:52 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:16.183 04:57:52 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:16.183 04:57:52 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:16.183 04:57:52 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:16.183 04:57:52 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:16.183 04:57:52 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:16.183 04:57:52 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.183 04:57:52 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.183 04:57:52 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.183 04:57:52 json_config -- paths/export.sh@5 -- # export PATH 00:03:16.183 04:57:52 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.183 04:57:52 json_config -- nvmf/common.sh@51 -- # : 0 00:03:16.183 04:57:52 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:16.183 04:57:52 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:03:16.183 04:57:52 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:16.183 04:57:52 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:16.183 04:57:52 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:16.183 04:57:52 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:16.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:16.183 04:57:52 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:16.183 04:57:52 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:16.183 04:57:52 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:16.183 04:57:52 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:16.183 04:57:52 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:16.183 04:57:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:16.183 04:57:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:16.183 04:57:52 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:16.183 04:57:52 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:16.183 04:57:52 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:16.183 04:57:52 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:16.183 04:57:52 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:16.183 04:57:52 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:16.183 04:57:52 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:16.183 04:57:52 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:16.183 04:57:52 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:16.183 04:57:52 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:16.183 04:57:52 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:16.183 04:57:52 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:16.183 INFO: JSON configuration test init 00:03:16.183 04:57:52 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:16.183 04:57:52 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:16.183 04:57:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:16.183 04:57:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:16.183 04:57:52 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:16.183 04:57:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:16.183 04:57:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:16.183 04:57:52 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:16.183 04:57:52 json_config -- 
json_config/common.sh@9 -- # local app=target 00:03:16.183 04:57:52 json_config -- json_config/common.sh@10 -- # shift 00:03:16.183 04:57:52 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:16.183 04:57:52 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:16.183 04:57:52 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:16.183 04:57:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:16.183 04:57:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:16.183 04:57:52 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3387778 00:03:16.183 04:57:52 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:16.183 Waiting for target to run... 00:03:16.183 04:57:52 json_config -- json_config/common.sh@25 -- # waitforlisten 3387778 /var/tmp/spdk_tgt.sock 00:03:16.183 04:57:52 json_config -- common/autotest_common.sh@835 -- # '[' -z 3387778 ']' 00:03:16.183 04:57:52 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:16.183 04:57:52 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:16.183 04:57:52 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:16.183 04:57:52 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:16.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:16.183 04:57:52 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:16.183 04:57:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:16.183 [2024-12-09 04:57:52.661132] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:03:16.183 [2024-12-09 04:57:52.661180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3387778 ] 00:03:16.750 [2024-12-09 04:57:53.100447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:16.750 [2024-12-09 04:57:53.158232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:17.010 04:57:53 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:17.010 04:57:53 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:17.010 04:57:53 json_config -- json_config/common.sh@26 -- # echo '' 00:03:17.010 00:03:17.010 04:57:53 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:17.010 04:57:53 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:17.010 04:57:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:17.010 04:57:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:17.010 04:57:53 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:17.010 04:57:53 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:17.010 04:57:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:17.010 04:57:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:17.010 04:57:53 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:17.010 04:57:53 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:17.010 04:57:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:20.314 04:57:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:20.314 04:57:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:20.314 04:57:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:20.314 04:57:56 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@54 -- # sort 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:20.314 04:57:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:20.314 04:57:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:20.314 04:57:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:20.314 04:57:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:03:20.314 04:57:56 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:20.314 04:57:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:20.573 MallocForNvmf0 00:03:20.573 04:57:57 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:20.573 04:57:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:20.573 MallocForNvmf1 00:03:20.832 04:57:57 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:20.832 04:57:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:20.832 [2024-12-09 04:57:57.393960] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:20.832 04:57:57 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:20.832 04:57:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:21.090 04:57:57 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:21.090 04:57:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:21.348 04:57:57 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:21.348 04:57:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:21.348 04:57:57 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:21.348 04:57:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:21.606 [2024-12-09 04:57:58.144293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:21.606 04:57:58 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:21.606 04:57:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:21.606 04:57:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:21.606 04:57:58 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:21.606 04:57:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:21.606 04:57:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:21.606 04:57:58 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:21.607 04:57:58 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:21.607 04:57:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:21.865 MallocBdevForConfigChangeCheck 00:03:21.865 04:57:58 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:21.865 04:57:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:21.865 04:57:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:21.865 04:57:58 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:03:21.865 04:57:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:22.431 04:57:58 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:03:22.431 INFO: shutting down applications... 
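The create_nvmf_subsystem_config phase above drives the target entirely over rpc.py. A minimal stand-alone sketch of the same sequence, assuming a spdk_tgt already listening on /var/tmp/spdk_tgt.sock (every command and value below is the one the trace shows):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock
    # Two malloc bdevs to act as namespaces (total size in MB, then block size in bytes).
    $rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    # TCP transport, then a subsystem carrying both namespaces and a loopback listener.
    $rpc -s $sock nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420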
00:03:22.431 04:57:58 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:03:22.431 04:57:58 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:03:22.431 04:57:58 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:03:22.431 04:57:58 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:23.806 Calling clear_iscsi_subsystem 00:03:23.806 Calling clear_nvmf_subsystem 00:03:23.806 Calling clear_nbd_subsystem 00:03:23.806 Calling clear_ublk_subsystem 00:03:23.806 Calling clear_vhost_blk_subsystem 00:03:23.806 Calling clear_vhost_scsi_subsystem 00:03:23.806 Calling clear_bdev_subsystem 00:03:23.806 04:58:00 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:23.806 04:58:00 json_config -- json_config/json_config.sh@350 -- # count=100 00:03:23.806 04:58:00 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:03:23.806 04:58:00 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:23.806 04:58:00 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:23.806 04:58:00 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:24.373 04:58:00 json_config -- json_config/json_config.sh@352 -- # break 00:03:24.373 04:58:00 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:03:24.373 04:58:00 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:03:24.373 04:58:00 json_config -- json_config/common.sh@31 -- # local app=target 00:03:24.373 04:58:00 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:24.373 04:58:00 json_config -- json_config/common.sh@35 -- # [[ -n 3387778 ]] 00:03:24.373 04:58:00 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3387778 00:03:24.373 04:58:00 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:24.373 04:58:00 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:24.373 04:58:00 json_config -- json_config/common.sh@41 -- # kill -0 3387778 00:03:24.373 04:58:00 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:24.630 04:58:01 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:24.630 04:58:01 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:24.630 04:58:01 json_config -- json_config/common.sh@41 -- # kill -0 3387778 00:03:24.630 04:58:01 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:24.630 04:58:01 json_config -- json_config/common.sh@43 -- # break 00:03:24.630 04:58:01 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:24.630 04:58:01 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:24.630 SPDK target shutdown done 00:03:24.630 04:58:01 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:03:24.630 INFO: relaunching applications... 
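The shutdown traced above is deliberately gentle: json_config/common.sh sends SIGINT and then polls the PID for up to 30 half-second intervals before declaring the target gone. A sketch of that wait loop, using the PID from this run (the real helper also does extra bookkeeping around app_pid):

    pid=3387778
    kill -SIGINT "$pid"
    for i in $(seq 1 30); do
        # kill -0 only checks that the process still exists; it sends no signal.
        kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
        sleep 0.5
    done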
00:03:24.630 04:58:01 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:24.630 04:58:01 json_config -- json_config/common.sh@9 -- # local app=target 00:03:24.630 04:58:01 json_config -- json_config/common.sh@10 -- # shift 00:03:24.630 04:58:01 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:24.630 04:58:01 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:24.630 04:58:01 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:24.630 04:58:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:24.630 04:58:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:24.630 04:58:01 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3389300 00:03:24.630 04:58:01 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:24.630 Waiting for target to run... 00:03:24.630 04:58:01 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:24.630 04:58:01 json_config -- json_config/common.sh@25 -- # waitforlisten 3389300 /var/tmp/spdk_tgt.sock 00:03:24.630 04:58:01 json_config -- common/autotest_common.sh@835 -- # '[' -z 3389300 ']' 00:03:24.630 04:58:01 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:24.630 04:58:01 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:24.630 04:58:01 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:24.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:24.630 04:58:01 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:24.630 04:58:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:24.887 [2024-12-09 04:58:01.292255] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:03:24.887 [2024-12-09 04:58:01.292312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3389300 ] 00:03:25.144 [2024-12-09 04:58:01.573893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:25.144 [2024-12-09 04:58:01.608177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:28.416 [2024-12-09 04:58:04.643760] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:28.416 [2024-12-09 04:58:04.676132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:28.416 04:58:04 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:28.416 04:58:04 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:28.416 04:58:04 json_config -- json_config/common.sh@26 -- # echo '' 00:03:28.416 00:03:28.416 04:58:04 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:03:28.416 04:58:04 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:28.416 INFO: Checking if target configuration is the same... 
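Relaunching is just a matter of pointing spdk_tgt back at the configuration it saved; the command below is the one visible in the trace, with the same app parameters:

    # Restart the target from the saved JSON (same invocation as traced above).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt \
        -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json &
    # On start-up the JSON replays the malloc bdevs, the TCP transport and the
    # 127.0.0.1:4420 listener, which is why the same *NOTICE* lines reappear above.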
00:03:28.416 04:58:04 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:03:28.416 04:58:04 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:28.416 04:58:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:28.416 + '[' 2 -ne 2 ']' 00:03:28.416 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:28.416 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:28.416 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:28.416 +++ basename /dev/fd/62 00:03:28.416 ++ mktemp /tmp/62.XXX 00:03:28.416 + tmp_file_1=/tmp/62.c18 00:03:28.416 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:28.416 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:28.416 + tmp_file_2=/tmp/spdk_tgt_config.json.rPM 00:03:28.416 + ret=0 00:03:28.416 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:28.416 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:28.673 + diff -u /tmp/62.c18 /tmp/spdk_tgt_config.json.rPM 00:03:28.673 + echo 'INFO: JSON config files are the same' 00:03:28.673 INFO: JSON config files are the same 00:03:28.673 + rm /tmp/62.c18 /tmp/spdk_tgt_config.json.rPM 00:03:28.673 + exit 0 00:03:28.673 04:58:05 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:03:28.673 04:58:05 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:28.673 INFO: changing configuration and checking if this can be detected... 00:03:28.673 04:58:05 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:28.673 04:58:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:28.673 04:58:05 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:28.673 04:58:05 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:03:28.673 04:58:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:28.673 + '[' 2 -ne 2 ']' 00:03:28.673 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:28.673 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:03:28.673 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:28.673 +++ basename /dev/fd/62 00:03:28.673 ++ mktemp /tmp/62.XXX 00:03:28.673 + tmp_file_1=/tmp/62.f3t 00:03:28.673 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:28.673 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:28.931 + tmp_file_2=/tmp/spdk_tgt_config.json.DlK 00:03:28.931 + ret=0 00:03:28.931 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:29.189 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:29.189 + diff -u /tmp/62.f3t /tmp/spdk_tgt_config.json.DlK 00:03:29.189 + ret=1 00:03:29.189 + echo '=== Start of file: /tmp/62.f3t ===' 00:03:29.189 + cat /tmp/62.f3t 00:03:29.189 + echo '=== End of file: /tmp/62.f3t ===' 00:03:29.189 + echo '' 00:03:29.189 + echo '=== Start of file: /tmp/spdk_tgt_config.json.DlK ===' 00:03:29.189 + cat /tmp/spdk_tgt_config.json.DlK 00:03:29.189 + echo '=== End of file: /tmp/spdk_tgt_config.json.DlK ===' 00:03:29.189 + echo '' 00:03:29.189 + rm /tmp/62.f3t /tmp/spdk_tgt_config.json.DlK 00:03:29.189 + exit 1 00:03:29.189 04:58:05 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:03:29.189 INFO: configuration change detected. 00:03:29.189 04:58:05 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:03:29.189 04:58:05 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:03:29.189 04:58:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:29.189 04:58:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:29.189 04:58:05 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:03:29.189 04:58:05 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:03:29.189 04:58:05 json_config -- json_config/json_config.sh@324 -- # [[ -n 3389300 ]] 00:03:29.189 04:58:05 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:03:29.189 04:58:05 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:03:29.189 04:58:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:29.189 04:58:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:29.189 04:58:05 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:03:29.189 04:58:05 json_config -- json_config/json_config.sh@200 -- # uname -s 00:03:29.189 04:58:05 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:03:29.189 04:58:05 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:03:29.189 04:58:05 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:03:29.189 04:58:05 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:03:29.189 04:58:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:29.189 04:58:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:29.189 04:58:05 json_config -- json_config/json_config.sh@330 -- # killprocess 3389300 00:03:29.189 04:58:05 json_config -- common/autotest_common.sh@954 -- # '[' -z 3389300 ']' 00:03:29.189 04:58:05 json_config -- common/autotest_common.sh@958 -- # kill -0 3389300 00:03:29.189 04:58:05 json_config -- common/autotest_common.sh@959 -- # uname 00:03:29.189 04:58:05 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:29.189 04:58:05 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3389300 00:03:29.189 04:58:05 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:29.189 04:58:05 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:29.189 04:58:05 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3389300' 00:03:29.189 killing process with pid 3389300 00:03:29.189 04:58:05 json_config -- common/autotest_common.sh@973 -- # kill 3389300 00:03:29.189 04:58:05 json_config -- common/autotest_common.sh@978 -- # wait 3389300 00:03:31.090 04:58:07 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:31.090 04:58:07 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:03:31.090 04:58:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:31.090 04:58:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:31.090 04:58:07 json_config -- json_config/json_config.sh@335 -- # return 0 00:03:31.090 04:58:07 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:03:31.090 INFO: Success 00:03:31.090 00:03:31.090 real 0m14.958s 00:03:31.090 user 0m15.413s 00:03:31.090 sys 0m2.493s 00:03:31.090 04:58:07 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:31.090 04:58:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:31.090 ************************************ 00:03:31.090 END TEST json_config 00:03:31.090 ************************************ 00:03:31.090 04:58:07 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:31.090 04:58:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:31.090 04:58:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:31.090 04:58:07 -- common/autotest_common.sh@10 -- # set +x 00:03:31.090 ************************************ 00:03:31.090 START TEST json_config_extra_key 00:03:31.090 ************************************ 00:03:31.090 04:58:07 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:31.090 04:58:07 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:31.090 04:58:07 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:03:31.090 04:58:07 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:31.090 04:58:07 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:31.090 04:58:07 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:31.090 04:58:07 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:31.090 04:58:07 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:31.090 04:58:07 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:03:31.090 04:58:07 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:03:31.090 04:58:07 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:03:31.090 04:58:07 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:03:31.090 04:58:07 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:03:31.090 04:58:07 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:03:31.090 04:58:07 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:03:31.090 04:58:07 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:31.090 04:58:07 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:03:31.090 04:58:07 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:03:31.090 04:58:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:31.090 04:58:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:31.090 04:58:07 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:03:31.090 04:58:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:03:31.090 04:58:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:31.090 04:58:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:03:31.090 04:58:07 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:03:31.090 04:58:07 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:03:31.090 04:58:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:03:31.090 04:58:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:31.090 04:58:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:03:31.090 04:58:07 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:03:31.090 04:58:07 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:31.090 04:58:07 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:31.090 04:58:07 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:03:31.090 04:58:07 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:31.090 04:58:07 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:31.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.090 --rc genhtml_branch_coverage=1 00:03:31.090 --rc genhtml_function_coverage=1 00:03:31.090 --rc genhtml_legend=1 00:03:31.090 --rc geninfo_all_blocks=1 00:03:31.090 --rc geninfo_unexecuted_blocks=1 00:03:31.090 00:03:31.090 ' 00:03:31.090 04:58:07 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:31.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.090 --rc genhtml_branch_coverage=1 00:03:31.090 --rc genhtml_function_coverage=1 00:03:31.090 --rc genhtml_legend=1 00:03:31.090 --rc geninfo_all_blocks=1 00:03:31.090 --rc geninfo_unexecuted_blocks=1 00:03:31.090 00:03:31.090 ' 00:03:31.090 04:58:07 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:31.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.090 --rc genhtml_branch_coverage=1 00:03:31.090 --rc genhtml_function_coverage=1 00:03:31.090 --rc genhtml_legend=1 00:03:31.090 --rc geninfo_all_blocks=1 00:03:31.090 --rc geninfo_unexecuted_blocks=1 00:03:31.090 00:03:31.090 ' 00:03:31.091 04:58:07 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:31.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.091 --rc genhtml_branch_coverage=1 00:03:31.091 --rc genhtml_function_coverage=1 00:03:31.091 --rc genhtml_legend=1 00:03:31.091 --rc geninfo_all_blocks=1 00:03:31.091 --rc geninfo_unexecuted_blocks=1 00:03:31.091 00:03:31.091 ' 00:03:31.091 04:58:07 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:31.091 04:58:07 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:03:31.091 04:58:07 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:31.091 04:58:07 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:31.091 04:58:07 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:31.091 04:58:07 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.091 04:58:07 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.091 04:58:07 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.091 04:58:07 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:03:31.091 04:58:07 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:31.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:31.091 04:58:07 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:31.091 04:58:07 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:31.091 04:58:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:31.091 04:58:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:31.091 04:58:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:31.091 04:58:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:31.091 04:58:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:31.091 04:58:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:31.091 04:58:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:03:31.091 04:58:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:31.091 04:58:07 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:31.091 04:58:07 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:03:31.091 INFO: launching applications... 
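The "[: : integer expression expected" message above comes from line 33 of nvmf/common.sh testing an empty expansion with -eq; the run continues past it, as the trace shows, but it is a classic bash pitfall. A small illustration with a hypothetical variable (not the one common.sh actually tests):

    # '[' needs an integer on both sides of -eq; an unset or empty variable is not one.
    unset MY_FLAG                                    # hypothetical name, for illustration only
    [ "$MY_FLAG" -eq 1 ] && echo enabled             # emits "[: : integer expression expected"
    # Usual guard: give the expansion a numeric default before comparing.
    [ "${MY_FLAG:-0}" -eq 1 ] && echo enabled || echo disabled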
00:03:31.091 04:58:07 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:31.091 04:58:07 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:03:31.091 04:58:07 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:03:31.091 04:58:07 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:31.091 04:58:07 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:31.091 04:58:07 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:03:31.091 04:58:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:31.091 04:58:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:31.091 04:58:07 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3390568 00:03:31.091 04:58:07 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:31.091 Waiting for target to run... 00:03:31.091 04:58:07 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3390568 /var/tmp/spdk_tgt.sock 00:03:31.091 04:58:07 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3390568 ']' 00:03:31.091 04:58:07 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:31.091 04:58:07 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:31.091 04:58:07 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:31.091 04:58:07 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:31.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:31.091 04:58:07 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:31.091 04:58:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:31.091 [2024-12-09 04:58:07.676157] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:03:31.091 [2024-12-09 04:58:07.676209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3390568 ] 00:03:31.657 [2024-12-09 04:58:08.118194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:31.657 [2024-12-09 04:58:08.173622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:31.915 04:58:08 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:31.915 04:58:08 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:03:31.915 04:58:08 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:03:31.915 00:03:31.915 04:58:08 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:03:31.915 INFO: shutting down applications... 
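waitforlisten above simply blocks until the freshly started target answers on its RPC socket, giving up after max_retries attempts. A rough stand-in for it, assuming rpc_get_methods as the liveness probe (the helper's real probe and timing may differ):

    # Poll the RPC socket until the target responds; 100 matches max_retries in the trace.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock
    for i in $(seq 1 100); do
        "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done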
00:03:31.915 04:58:08 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:31.915 04:58:08 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:03:31.915 04:58:08 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:31.915 04:58:08 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3390568 ]] 00:03:31.915 04:58:08 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3390568 00:03:31.915 04:58:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:31.915 04:58:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:31.915 04:58:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3390568 00:03:31.915 04:58:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:32.482 04:58:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:03:32.482 04:58:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:32.482 04:58:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3390568 00:03:32.482 04:58:09 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:32.482 04:58:09 json_config_extra_key -- json_config/common.sh@43 -- # break 00:03:32.482 04:58:09 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:32.482 04:58:09 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:32.482 SPDK target shutdown done 00:03:32.482 04:58:09 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:03:32.482 Success 00:03:32.482 00:03:32.482 real 0m1.572s 00:03:32.482 user 0m1.246s 00:03:32.482 sys 0m0.550s 00:03:32.482 04:58:09 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:32.482 04:58:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:32.482 ************************************ 00:03:32.482 END TEST json_config_extra_key 00:03:32.482 ************************************ 00:03:32.482 04:58:09 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:32.482 04:58:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:32.482 04:58:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:32.482 04:58:09 -- common/autotest_common.sh@10 -- # set +x 00:03:32.482 ************************************ 00:03:32.482 START TEST alias_rpc 00:03:32.482 ************************************ 00:03:32.482 04:58:09 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:32.741 * Looking for test storage... 
00:03:32.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:03:32.741 04:58:09 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:32.741 04:58:09 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:32.741 04:58:09 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:32.741 04:58:09 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:32.741 04:58:09 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:32.741 04:58:09 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:32.741 04:58:09 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:32.741 04:58:09 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:32.741 04:58:09 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:32.741 04:58:09 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:32.741 04:58:09 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:32.741 04:58:09 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:32.741 04:58:09 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:32.741 04:58:09 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:32.741 04:58:09 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:32.741 04:58:09 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:32.741 04:58:09 alias_rpc -- scripts/common.sh@345 -- # : 1 00:03:32.741 04:58:09 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:32.741 04:58:09 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:32.741 04:58:09 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:32.741 04:58:09 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:03:32.741 04:58:09 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:32.741 04:58:09 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:03:32.741 04:58:09 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:32.741 04:58:09 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:32.741 04:58:09 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:03:32.741 04:58:09 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:32.742 04:58:09 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:03:32.742 04:58:09 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:32.742 04:58:09 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:32.742 04:58:09 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:32.742 04:58:09 alias_rpc -- scripts/common.sh@368 -- # return 0 00:03:32.742 04:58:09 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:32.742 04:58:09 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:32.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.742 --rc genhtml_branch_coverage=1 00:03:32.742 --rc genhtml_function_coverage=1 00:03:32.742 --rc genhtml_legend=1 00:03:32.742 --rc geninfo_all_blocks=1 00:03:32.742 --rc geninfo_unexecuted_blocks=1 00:03:32.742 00:03:32.742 ' 00:03:32.742 04:58:09 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:32.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.742 --rc genhtml_branch_coverage=1 00:03:32.742 --rc genhtml_function_coverage=1 00:03:32.742 --rc genhtml_legend=1 00:03:32.742 --rc geninfo_all_blocks=1 00:03:32.742 --rc geninfo_unexecuted_blocks=1 00:03:32.742 00:03:32.742 ' 00:03:32.742 04:58:09 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:32.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.742 --rc genhtml_branch_coverage=1 00:03:32.742 --rc genhtml_function_coverage=1 00:03:32.742 --rc genhtml_legend=1 00:03:32.742 --rc geninfo_all_blocks=1 00:03:32.742 --rc geninfo_unexecuted_blocks=1 00:03:32.742 00:03:32.742 ' 00:03:32.742 04:58:09 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:32.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.742 --rc genhtml_branch_coverage=1 00:03:32.742 --rc genhtml_function_coverage=1 00:03:32.742 --rc genhtml_legend=1 00:03:32.742 --rc geninfo_all_blocks=1 00:03:32.742 --rc geninfo_unexecuted_blocks=1 00:03:32.742 00:03:32.742 ' 00:03:32.742 04:58:09 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:32.742 04:58:09 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3390858 00:03:32.742 04:58:09 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3390858 00:03:32.742 04:58:09 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:32.742 04:58:09 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3390858 ']' 00:03:32.742 04:58:09 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:32.742 04:58:09 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:32.742 04:58:09 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:32.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:32.742 04:58:09 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:32.742 04:58:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:32.742 [2024-12-09 04:58:09.294272] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:03:32.742 [2024-12-09 04:58:09.294319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3390858 ] 00:03:32.742 [2024-12-09 04:58:09.358069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:33.000 [2024-12-09 04:58:09.399253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:33.000 04:58:09 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:33.000 04:58:09 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:03:33.000 04:58:09 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:03:33.259 04:58:09 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3390858 00:03:33.259 04:58:09 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3390858 ']' 00:03:33.259 04:58:09 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3390858 00:03:33.259 04:58:09 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:03:33.259 04:58:09 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:33.259 04:58:09 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3390858 00:03:33.259 04:58:09 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:33.259 04:58:09 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:33.259 04:58:09 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3390858' 00:03:33.259 killing process with pid 3390858 00:03:33.259 04:58:09 alias_rpc -- common/autotest_common.sh@973 -- # kill 3390858 00:03:33.259 04:58:09 alias_rpc -- common/autotest_common.sh@978 -- # wait 3390858 00:03:33.826 00:03:33.826 real 0m1.140s 00:03:33.826 user 0m1.180s 00:03:33.826 sys 0m0.367s 00:03:33.826 04:58:10 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:33.826 04:58:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:33.826 ************************************ 00:03:33.826 END TEST alias_rpc 00:03:33.826 ************************************ 00:03:33.826 04:58:10 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:03:33.826 04:58:10 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:33.826 04:58:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:33.826 04:58:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:33.826 04:58:10 -- common/autotest_common.sh@10 -- # set +x 00:03:33.826 ************************************ 00:03:33.826 START TEST spdkcli_tcp 00:03:33.826 ************************************ 00:03:33.826 04:58:10 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:33.826 * Looking for test storage... 
00:03:33.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:03:33.826 04:58:10 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:33.826 04:58:10 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:03:33.826 04:58:10 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:33.826 04:58:10 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:33.826 04:58:10 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:03:33.826 04:58:10 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:33.826 04:58:10 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:33.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.826 --rc genhtml_branch_coverage=1 00:03:33.826 --rc genhtml_function_coverage=1 00:03:33.826 --rc genhtml_legend=1 00:03:33.826 --rc geninfo_all_blocks=1 00:03:33.826 --rc geninfo_unexecuted_blocks=1 00:03:33.826 00:03:33.826 ' 00:03:33.826 04:58:10 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:33.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.826 --rc genhtml_branch_coverage=1 00:03:33.826 --rc genhtml_function_coverage=1 00:03:33.826 --rc genhtml_legend=1 00:03:33.826 --rc geninfo_all_blocks=1 00:03:33.826 --rc 
geninfo_unexecuted_blocks=1 00:03:33.826 00:03:33.826 ' 00:03:33.826 04:58:10 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:33.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.826 --rc genhtml_branch_coverage=1 00:03:33.826 --rc genhtml_function_coverage=1 00:03:33.826 --rc genhtml_legend=1 00:03:33.826 --rc geninfo_all_blocks=1 00:03:33.826 --rc geninfo_unexecuted_blocks=1 00:03:33.826 00:03:33.826 ' 00:03:33.826 04:58:10 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:33.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.826 --rc genhtml_branch_coverage=1 00:03:33.826 --rc genhtml_function_coverage=1 00:03:33.826 --rc genhtml_legend=1 00:03:33.826 --rc geninfo_all_blocks=1 00:03:33.826 --rc geninfo_unexecuted_blocks=1 00:03:33.826 00:03:33.826 ' 00:03:33.826 04:58:10 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:03:33.826 04:58:10 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:03:33.826 04:58:10 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:03:33.826 04:58:10 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:03:33.826 04:58:10 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:03:33.826 04:58:10 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:03:33.826 04:58:10 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:03:33.826 04:58:10 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:33.826 04:58:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:33.826 04:58:10 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3391147 00:03:33.826 04:58:10 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:03:33.826 04:58:10 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3391147 00:03:33.826 04:58:10 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3391147 ']' 00:03:33.826 04:58:10 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:33.826 04:58:10 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:33.826 04:58:10 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:33.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:33.826 04:58:10 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:33.826 04:58:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:34.084 [2024-12-09 04:58:10.510324] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:03:34.084 [2024-12-09 04:58:10.510374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3391147 ] 00:03:34.084 [2024-12-09 04:58:10.576405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:34.084 [2024-12-09 04:58:10.618854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:34.084 [2024-12-09 04:58:10.618858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:34.342 04:58:10 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:34.342 04:58:10 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:03:34.342 04:58:10 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3391154 00:03:34.342 04:58:10 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:03:34.342 04:58:10 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:03:34.601 [ 00:03:34.601 "bdev_malloc_delete", 00:03:34.601 "bdev_malloc_create", 00:03:34.601 "bdev_null_resize", 00:03:34.601 "bdev_null_delete", 00:03:34.601 "bdev_null_create", 00:03:34.601 "bdev_nvme_cuse_unregister", 00:03:34.601 "bdev_nvme_cuse_register", 00:03:34.601 "bdev_opal_new_user", 00:03:34.601 "bdev_opal_set_lock_state", 00:03:34.601 "bdev_opal_delete", 00:03:34.601 "bdev_opal_get_info", 00:03:34.601 "bdev_opal_create", 00:03:34.601 "bdev_nvme_opal_revert", 00:03:34.601 "bdev_nvme_opal_init", 00:03:34.601 "bdev_nvme_send_cmd", 00:03:34.601 "bdev_nvme_set_keys", 00:03:34.601 "bdev_nvme_get_path_iostat", 00:03:34.601 "bdev_nvme_get_mdns_discovery_info", 00:03:34.601 "bdev_nvme_stop_mdns_discovery", 00:03:34.601 "bdev_nvme_start_mdns_discovery", 00:03:34.601 "bdev_nvme_set_multipath_policy", 00:03:34.601 "bdev_nvme_set_preferred_path", 00:03:34.601 "bdev_nvme_get_io_paths", 00:03:34.601 "bdev_nvme_remove_error_injection", 00:03:34.601 "bdev_nvme_add_error_injection", 00:03:34.601 "bdev_nvme_get_discovery_info", 00:03:34.601 "bdev_nvme_stop_discovery", 00:03:34.601 "bdev_nvme_start_discovery", 00:03:34.601 "bdev_nvme_get_controller_health_info", 00:03:34.601 "bdev_nvme_disable_controller", 00:03:34.601 "bdev_nvme_enable_controller", 00:03:34.601 "bdev_nvme_reset_controller", 00:03:34.601 "bdev_nvme_get_transport_statistics", 00:03:34.601 "bdev_nvme_apply_firmware", 00:03:34.601 "bdev_nvme_detach_controller", 00:03:34.601 "bdev_nvme_get_controllers", 00:03:34.601 "bdev_nvme_attach_controller", 00:03:34.601 "bdev_nvme_set_hotplug", 00:03:34.601 "bdev_nvme_set_options", 00:03:34.601 "bdev_passthru_delete", 00:03:34.601 "bdev_passthru_create", 00:03:34.601 "bdev_lvol_set_parent_bdev", 00:03:34.601 "bdev_lvol_set_parent", 00:03:34.601 "bdev_lvol_check_shallow_copy", 00:03:34.601 "bdev_lvol_start_shallow_copy", 00:03:34.601 "bdev_lvol_grow_lvstore", 00:03:34.601 "bdev_lvol_get_lvols", 00:03:34.602 "bdev_lvol_get_lvstores", 00:03:34.602 "bdev_lvol_delete", 00:03:34.602 "bdev_lvol_set_read_only", 00:03:34.602 "bdev_lvol_resize", 00:03:34.602 "bdev_lvol_decouple_parent", 00:03:34.602 "bdev_lvol_inflate", 00:03:34.602 "bdev_lvol_rename", 00:03:34.602 "bdev_lvol_clone_bdev", 00:03:34.602 "bdev_lvol_clone", 00:03:34.602 "bdev_lvol_snapshot", 00:03:34.602 "bdev_lvol_create", 00:03:34.602 "bdev_lvol_delete_lvstore", 00:03:34.602 "bdev_lvol_rename_lvstore", 
00:03:34.602 "bdev_lvol_create_lvstore", 00:03:34.602 "bdev_raid_set_options", 00:03:34.602 "bdev_raid_remove_base_bdev", 00:03:34.602 "bdev_raid_add_base_bdev", 00:03:34.602 "bdev_raid_delete", 00:03:34.602 "bdev_raid_create", 00:03:34.602 "bdev_raid_get_bdevs", 00:03:34.602 "bdev_error_inject_error", 00:03:34.602 "bdev_error_delete", 00:03:34.602 "bdev_error_create", 00:03:34.602 "bdev_split_delete", 00:03:34.602 "bdev_split_create", 00:03:34.602 "bdev_delay_delete", 00:03:34.602 "bdev_delay_create", 00:03:34.602 "bdev_delay_update_latency", 00:03:34.602 "bdev_zone_block_delete", 00:03:34.602 "bdev_zone_block_create", 00:03:34.602 "blobfs_create", 00:03:34.602 "blobfs_detect", 00:03:34.602 "blobfs_set_cache_size", 00:03:34.602 "bdev_aio_delete", 00:03:34.602 "bdev_aio_rescan", 00:03:34.602 "bdev_aio_create", 00:03:34.602 "bdev_ftl_set_property", 00:03:34.602 "bdev_ftl_get_properties", 00:03:34.602 "bdev_ftl_get_stats", 00:03:34.602 "bdev_ftl_unmap", 00:03:34.602 "bdev_ftl_unload", 00:03:34.602 "bdev_ftl_delete", 00:03:34.602 "bdev_ftl_load", 00:03:34.602 "bdev_ftl_create", 00:03:34.602 "bdev_virtio_attach_controller", 00:03:34.602 "bdev_virtio_scsi_get_devices", 00:03:34.602 "bdev_virtio_detach_controller", 00:03:34.602 "bdev_virtio_blk_set_hotplug", 00:03:34.602 "bdev_iscsi_delete", 00:03:34.602 "bdev_iscsi_create", 00:03:34.602 "bdev_iscsi_set_options", 00:03:34.602 "accel_error_inject_error", 00:03:34.602 "ioat_scan_accel_module", 00:03:34.602 "dsa_scan_accel_module", 00:03:34.602 "iaa_scan_accel_module", 00:03:34.602 "vfu_virtio_create_fs_endpoint", 00:03:34.602 "vfu_virtio_create_scsi_endpoint", 00:03:34.602 "vfu_virtio_scsi_remove_target", 00:03:34.602 "vfu_virtio_scsi_add_target", 00:03:34.602 "vfu_virtio_create_blk_endpoint", 00:03:34.602 "vfu_virtio_delete_endpoint", 00:03:34.602 "keyring_file_remove_key", 00:03:34.602 "keyring_file_add_key", 00:03:34.602 "keyring_linux_set_options", 00:03:34.602 "fsdev_aio_delete", 00:03:34.602 "fsdev_aio_create", 00:03:34.602 "iscsi_get_histogram", 00:03:34.602 "iscsi_enable_histogram", 00:03:34.602 "iscsi_set_options", 00:03:34.602 "iscsi_get_auth_groups", 00:03:34.602 "iscsi_auth_group_remove_secret", 00:03:34.602 "iscsi_auth_group_add_secret", 00:03:34.602 "iscsi_delete_auth_group", 00:03:34.602 "iscsi_create_auth_group", 00:03:34.602 "iscsi_set_discovery_auth", 00:03:34.602 "iscsi_get_options", 00:03:34.602 "iscsi_target_node_request_logout", 00:03:34.602 "iscsi_target_node_set_redirect", 00:03:34.602 "iscsi_target_node_set_auth", 00:03:34.602 "iscsi_target_node_add_lun", 00:03:34.602 "iscsi_get_stats", 00:03:34.602 "iscsi_get_connections", 00:03:34.602 "iscsi_portal_group_set_auth", 00:03:34.602 "iscsi_start_portal_group", 00:03:34.602 "iscsi_delete_portal_group", 00:03:34.602 "iscsi_create_portal_group", 00:03:34.602 "iscsi_get_portal_groups", 00:03:34.602 "iscsi_delete_target_node", 00:03:34.602 "iscsi_target_node_remove_pg_ig_maps", 00:03:34.602 "iscsi_target_node_add_pg_ig_maps", 00:03:34.602 "iscsi_create_target_node", 00:03:34.602 "iscsi_get_target_nodes", 00:03:34.602 "iscsi_delete_initiator_group", 00:03:34.602 "iscsi_initiator_group_remove_initiators", 00:03:34.602 "iscsi_initiator_group_add_initiators", 00:03:34.602 "iscsi_create_initiator_group", 00:03:34.602 "iscsi_get_initiator_groups", 00:03:34.602 "nvmf_set_crdt", 00:03:34.602 "nvmf_set_config", 00:03:34.602 "nvmf_set_max_subsystems", 00:03:34.602 "nvmf_stop_mdns_prr", 00:03:34.602 "nvmf_publish_mdns_prr", 00:03:34.602 "nvmf_subsystem_get_listeners", 00:03:34.602 
"nvmf_subsystem_get_qpairs", 00:03:34.602 "nvmf_subsystem_get_controllers", 00:03:34.602 "nvmf_get_stats", 00:03:34.602 "nvmf_get_transports", 00:03:34.602 "nvmf_create_transport", 00:03:34.602 "nvmf_get_targets", 00:03:34.602 "nvmf_delete_target", 00:03:34.602 "nvmf_create_target", 00:03:34.602 "nvmf_subsystem_allow_any_host", 00:03:34.602 "nvmf_subsystem_set_keys", 00:03:34.602 "nvmf_subsystem_remove_host", 00:03:34.602 "nvmf_subsystem_add_host", 00:03:34.602 "nvmf_ns_remove_host", 00:03:34.602 "nvmf_ns_add_host", 00:03:34.602 "nvmf_subsystem_remove_ns", 00:03:34.602 "nvmf_subsystem_set_ns_ana_group", 00:03:34.602 "nvmf_subsystem_add_ns", 00:03:34.602 "nvmf_subsystem_listener_set_ana_state", 00:03:34.602 "nvmf_discovery_get_referrals", 00:03:34.602 "nvmf_discovery_remove_referral", 00:03:34.602 "nvmf_discovery_add_referral", 00:03:34.602 "nvmf_subsystem_remove_listener", 00:03:34.602 "nvmf_subsystem_add_listener", 00:03:34.602 "nvmf_delete_subsystem", 00:03:34.602 "nvmf_create_subsystem", 00:03:34.602 "nvmf_get_subsystems", 00:03:34.602 "env_dpdk_get_mem_stats", 00:03:34.602 "nbd_get_disks", 00:03:34.602 "nbd_stop_disk", 00:03:34.602 "nbd_start_disk", 00:03:34.602 "ublk_recover_disk", 00:03:34.602 "ublk_get_disks", 00:03:34.602 "ublk_stop_disk", 00:03:34.602 "ublk_start_disk", 00:03:34.602 "ublk_destroy_target", 00:03:34.602 "ublk_create_target", 00:03:34.602 "virtio_blk_create_transport", 00:03:34.602 "virtio_blk_get_transports", 00:03:34.602 "vhost_controller_set_coalescing", 00:03:34.602 "vhost_get_controllers", 00:03:34.602 "vhost_delete_controller", 00:03:34.602 "vhost_create_blk_controller", 00:03:34.602 "vhost_scsi_controller_remove_target", 00:03:34.602 "vhost_scsi_controller_add_target", 00:03:34.602 "vhost_start_scsi_controller", 00:03:34.602 "vhost_create_scsi_controller", 00:03:34.602 "thread_set_cpumask", 00:03:34.602 "scheduler_set_options", 00:03:34.602 "framework_get_governor", 00:03:34.602 "framework_get_scheduler", 00:03:34.602 "framework_set_scheduler", 00:03:34.602 "framework_get_reactors", 00:03:34.602 "thread_get_io_channels", 00:03:34.602 "thread_get_pollers", 00:03:34.602 "thread_get_stats", 00:03:34.602 "framework_monitor_context_switch", 00:03:34.602 "spdk_kill_instance", 00:03:34.602 "log_enable_timestamps", 00:03:34.602 "log_get_flags", 00:03:34.602 "log_clear_flag", 00:03:34.602 "log_set_flag", 00:03:34.602 "log_get_level", 00:03:34.602 "log_set_level", 00:03:34.602 "log_get_print_level", 00:03:34.602 "log_set_print_level", 00:03:34.602 "framework_enable_cpumask_locks", 00:03:34.602 "framework_disable_cpumask_locks", 00:03:34.602 "framework_wait_init", 00:03:34.602 "framework_start_init", 00:03:34.602 "scsi_get_devices", 00:03:34.602 "bdev_get_histogram", 00:03:34.602 "bdev_enable_histogram", 00:03:34.602 "bdev_set_qos_limit", 00:03:34.602 "bdev_set_qd_sampling_period", 00:03:34.602 "bdev_get_bdevs", 00:03:34.602 "bdev_reset_iostat", 00:03:34.602 "bdev_get_iostat", 00:03:34.602 "bdev_examine", 00:03:34.602 "bdev_wait_for_examine", 00:03:34.602 "bdev_set_options", 00:03:34.602 "accel_get_stats", 00:03:34.602 "accel_set_options", 00:03:34.602 "accel_set_driver", 00:03:34.602 "accel_crypto_key_destroy", 00:03:34.602 "accel_crypto_keys_get", 00:03:34.602 "accel_crypto_key_create", 00:03:34.602 "accel_assign_opc", 00:03:34.602 "accel_get_module_info", 00:03:34.602 "accel_get_opc_assignments", 00:03:34.602 "vmd_rescan", 00:03:34.602 "vmd_remove_device", 00:03:34.602 "vmd_enable", 00:03:34.602 "sock_get_default_impl", 00:03:34.602 "sock_set_default_impl", 
00:03:34.602 "sock_impl_set_options", 00:03:34.602 "sock_impl_get_options", 00:03:34.602 "iobuf_get_stats", 00:03:34.602 "iobuf_set_options", 00:03:34.602 "keyring_get_keys", 00:03:34.602 "vfu_tgt_set_base_path", 00:03:34.602 "framework_get_pci_devices", 00:03:34.602 "framework_get_config", 00:03:34.602 "framework_get_subsystems", 00:03:34.602 "fsdev_set_opts", 00:03:34.602 "fsdev_get_opts", 00:03:34.602 "trace_get_info", 00:03:34.602 "trace_get_tpoint_group_mask", 00:03:34.602 "trace_disable_tpoint_group", 00:03:34.602 "trace_enable_tpoint_group", 00:03:34.602 "trace_clear_tpoint_mask", 00:03:34.602 "trace_set_tpoint_mask", 00:03:34.602 "notify_get_notifications", 00:03:34.602 "notify_get_types", 00:03:34.602 "spdk_get_version", 00:03:34.602 "rpc_get_methods" 00:03:34.602 ] 00:03:34.602 04:58:11 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:03:34.602 04:58:11 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:34.602 04:58:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:34.602 04:58:11 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:03:34.602 04:58:11 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3391147 00:03:34.602 04:58:11 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3391147 ']' 00:03:34.602 04:58:11 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3391147 00:03:34.602 04:58:11 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:03:34.602 04:58:11 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:34.602 04:58:11 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3391147 00:03:34.602 04:58:11 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:34.602 04:58:11 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:34.602 04:58:11 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3391147' 00:03:34.602 killing process with pid 3391147 00:03:34.602 04:58:11 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3391147 00:03:34.603 04:58:11 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3391147 00:03:34.861 00:03:34.861 real 0m1.183s 00:03:34.861 user 0m1.973s 00:03:34.861 sys 0m0.442s 00:03:34.861 04:58:11 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:34.861 04:58:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:34.861 ************************************ 00:03:34.861 END TEST spdkcli_tcp 00:03:34.861 ************************************ 00:03:34.861 04:58:11 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:34.861 04:58:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:34.861 04:58:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:34.861 04:58:11 -- common/autotest_common.sh@10 -- # set +x 00:03:35.120 ************************************ 00:03:35.120 START TEST dpdk_mem_utility 00:03:35.120 ************************************ 00:03:35.120 04:58:11 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:35.120 * Looking for test storage... 
00:03:35.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:03:35.121 04:58:11 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:35.121 04:58:11 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:35.121 04:58:11 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:03:35.121 04:58:11 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:35.121 04:58:11 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:03:35.121 04:58:11 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:35.121 04:58:11 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:35.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.121 --rc genhtml_branch_coverage=1 00:03:35.121 --rc genhtml_function_coverage=1 00:03:35.121 --rc genhtml_legend=1 00:03:35.121 --rc geninfo_all_blocks=1 00:03:35.121 --rc geninfo_unexecuted_blocks=1 00:03:35.121 00:03:35.121 ' 00:03:35.121 04:58:11 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:35.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.121 --rc 
genhtml_branch_coverage=1 00:03:35.121 --rc genhtml_function_coverage=1 00:03:35.121 --rc genhtml_legend=1 00:03:35.121 --rc geninfo_all_blocks=1 00:03:35.121 --rc geninfo_unexecuted_blocks=1 00:03:35.121 00:03:35.121 ' 00:03:35.121 04:58:11 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:35.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.121 --rc genhtml_branch_coverage=1 00:03:35.121 --rc genhtml_function_coverage=1 00:03:35.121 --rc genhtml_legend=1 00:03:35.121 --rc geninfo_all_blocks=1 00:03:35.121 --rc geninfo_unexecuted_blocks=1 00:03:35.121 00:03:35.121 ' 00:03:35.121 04:58:11 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:35.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.121 --rc genhtml_branch_coverage=1 00:03:35.121 --rc genhtml_function_coverage=1 00:03:35.121 --rc genhtml_legend=1 00:03:35.121 --rc geninfo_all_blocks=1 00:03:35.121 --rc geninfo_unexecuted_blocks=1 00:03:35.121 00:03:35.121 ' 00:03:35.121 04:58:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:35.121 04:58:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3391419 00:03:35.121 04:58:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3391419 00:03:35.121 04:58:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:35.121 04:58:11 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3391419 ']' 00:03:35.121 04:58:11 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:35.121 04:58:11 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:35.121 04:58:11 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:35.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:35.121 04:58:11 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:35.121 04:58:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:35.121 [2024-12-09 04:58:11.745727] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:03:35.121 [2024-12-09 04:58:11.745780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3391419 ] 00:03:35.380 [2024-12-09 04:58:11.810398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:35.380 [2024-12-09 04:58:11.853026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:35.639 04:58:12 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:35.639 04:58:12 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:03:35.639 04:58:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:03:35.639 04:58:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:03:35.639 04:58:12 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.639 04:58:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:35.639 { 00:03:35.639 "filename": "/tmp/spdk_mem_dump.txt" 00:03:35.639 } 00:03:35.639 04:58:12 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.639 04:58:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:35.639 DPDK memory size 818.000000 MiB in 1 heap(s) 00:03:35.639 1 heaps totaling size 818.000000 MiB 00:03:35.639 size: 818.000000 MiB heap id: 0 00:03:35.639 end heaps---------- 00:03:35.639 9 mempools totaling size 603.782043 MiB 00:03:35.639 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:03:35.639 size: 158.602051 MiB name: PDU_data_out_Pool 00:03:35.639 size: 100.555481 MiB name: bdev_io_3391419 00:03:35.639 size: 50.003479 MiB name: msgpool_3391419 00:03:35.639 size: 36.509338 MiB name: fsdev_io_3391419 00:03:35.639 size: 21.763794 MiB name: PDU_Pool 00:03:35.639 size: 19.513306 MiB name: SCSI_TASK_Pool 00:03:35.639 size: 4.133484 MiB name: evtpool_3391419 00:03:35.639 size: 0.026123 MiB name: Session_Pool 00:03:35.639 end mempools------- 00:03:35.639 6 memzones totaling size 4.142822 MiB 00:03:35.639 size: 1.000366 MiB name: RG_ring_0_3391419 00:03:35.639 size: 1.000366 MiB name: RG_ring_1_3391419 00:03:35.639 size: 1.000366 MiB name: RG_ring_4_3391419 00:03:35.639 size: 1.000366 MiB name: RG_ring_5_3391419 00:03:35.639 size: 0.125366 MiB name: RG_ring_2_3391419 00:03:35.639 size: 0.015991 MiB name: RG_ring_3_3391419 00:03:35.639 end memzones------- 00:03:35.639 04:58:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:03:35.640 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:03:35.640 list of free elements. 
size: 10.852478 MiB 00:03:35.640 element at address: 0x200019200000 with size: 0.999878 MiB 00:03:35.640 element at address: 0x200019400000 with size: 0.999878 MiB 00:03:35.640 element at address: 0x200000400000 with size: 0.998535 MiB 00:03:35.640 element at address: 0x200032000000 with size: 0.994446 MiB 00:03:35.640 element at address: 0x200006400000 with size: 0.959839 MiB 00:03:35.640 element at address: 0x200012c00000 with size: 0.944275 MiB 00:03:35.640 element at address: 0x200019600000 with size: 0.936584 MiB 00:03:35.640 element at address: 0x200000200000 with size: 0.717346 MiB 00:03:35.640 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:03:35.640 element at address: 0x200000c00000 with size: 0.495422 MiB 00:03:35.640 element at address: 0x20000a600000 with size: 0.490723 MiB 00:03:35.640 element at address: 0x200019800000 with size: 0.485657 MiB 00:03:35.640 element at address: 0x200003e00000 with size: 0.481934 MiB 00:03:35.640 element at address: 0x200028200000 with size: 0.410034 MiB 00:03:35.640 element at address: 0x200000800000 with size: 0.355042 MiB 00:03:35.640 list of standard malloc elements. size: 199.218628 MiB 00:03:35.640 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:03:35.640 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:03:35.640 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:03:35.640 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:03:35.640 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:03:35.640 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:03:35.640 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:03:35.640 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:03:35.640 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:03:35.640 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:03:35.640 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:03:35.640 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:03:35.640 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:03:35.640 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:03:35.640 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:03:35.640 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:03:35.640 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:03:35.640 element at address: 0x20000085b040 with size: 0.000183 MiB 00:03:35.640 element at address: 0x20000085f300 with size: 0.000183 MiB 00:03:35.640 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:03:35.640 element at address: 0x20000087f680 with size: 0.000183 MiB 00:03:35.640 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:03:35.640 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:03:35.640 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:03:35.640 element at address: 0x200000cff000 with size: 0.000183 MiB 00:03:35.640 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:03:35.640 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:03:35.640 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:03:35.640 element at address: 0x200003efb980 with size: 0.000183 MiB 00:03:35.640 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:03:35.640 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:03:35.640 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:03:35.640 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:03:35.640 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:03:35.640 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:03:35.640 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:03:35.640 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:03:35.640 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:03:35.640 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:03:35.640 element at address: 0x200028268f80 with size: 0.000183 MiB 00:03:35.640 element at address: 0x200028269040 with size: 0.000183 MiB 00:03:35.640 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:03:35.640 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:03:35.640 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:03:35.640 list of memzone associated elements. size: 607.928894 MiB 00:03:35.640 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:03:35.640 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:03:35.640 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:03:35.640 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:03:35.640 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:03:35.640 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_3391419_0 00:03:35.640 element at address: 0x200000dff380 with size: 48.003052 MiB 00:03:35.640 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3391419_0 00:03:35.640 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:03:35.640 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3391419_0 00:03:35.640 element at address: 0x2000199be940 with size: 20.255554 MiB 00:03:35.640 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:03:35.640 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:03:35.640 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:03:35.640 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:03:35.640 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3391419_0 00:03:35.640 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:03:35.640 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3391419 00:03:35.640 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:03:35.640 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3391419 00:03:35.640 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:03:35.640 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:03:35.640 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:03:35.640 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:03:35.640 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:03:35.640 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:03:35.640 element at address: 0x200003efba40 with size: 1.008118 MiB 00:03:35.640 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:03:35.640 element at address: 0x200000cff180 with size: 1.000488 MiB 00:03:35.640 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3391419 00:03:35.640 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:03:35.640 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3391419 00:03:35.640 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:03:35.640 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3391419 00:03:35.640 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:03:35.640 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3391419 00:03:35.640 element at address: 0x20000087f740 with size: 0.500488 MiB 00:03:35.640 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3391419 00:03:35.640 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:03:35.640 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3391419 00:03:35.640 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:03:35.640 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:03:35.640 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:03:35.640 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:03:35.640 element at address: 0x20001987c540 with size: 0.250488 MiB 00:03:35.640 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:03:35.640 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:03:35.640 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3391419 00:03:35.640 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:03:35.640 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3391419 00:03:35.640 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:03:35.640 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:03:35.640 element at address: 0x200028269100 with size: 0.023743 MiB 00:03:35.640 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:03:35.640 element at address: 0x20000085b100 with size: 0.016113 MiB 00:03:35.640 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3391419 00:03:35.640 element at address: 0x20002826f240 with size: 0.002441 MiB 00:03:35.640 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:03:35.640 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:03:35.640 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3391419 00:03:35.640 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:03:35.640 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3391419 00:03:35.640 element at address: 0x20000085af00 with size: 0.000305 MiB 00:03:35.640 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3391419 00:03:35.640 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:03:35.640 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:03:35.640 04:58:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:03:35.640 04:58:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3391419 00:03:35.640 04:58:12 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3391419 ']' 00:03:35.640 04:58:12 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3391419 00:03:35.640 04:58:12 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:03:35.640 04:58:12 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:35.640 04:58:12 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3391419 00:03:35.640 04:58:12 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:35.640 04:58:12 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:35.640 04:58:12 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3391419' 00:03:35.640 killing process with pid 3391419 00:03:35.640 04:58:12 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3391419 00:03:35.640 04:58:12 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3391419 00:03:36.208 00:03:36.208 real 0m1.033s 00:03:36.208 user 0m0.984s 00:03:36.208 sys 0m0.397s 00:03:36.208 04:58:12 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:36.208 04:58:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:36.208 ************************************ 00:03:36.208 END TEST dpdk_mem_utility 00:03:36.208 ************************************ 00:03:36.208 04:58:12 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:03:36.208 04:58:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:36.208 04:58:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:36.208 04:58:12 -- common/autotest_common.sh@10 -- # set +x 00:03:36.208 ************************************ 00:03:36.208 START TEST event 00:03:36.208 ************************************ 00:03:36.208 04:58:12 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:03:36.208 * Looking for test storage... 00:03:36.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:03:36.208 04:58:12 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:36.208 04:58:12 event -- common/autotest_common.sh@1693 -- # lcov --version 00:03:36.208 04:58:12 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:36.208 04:58:12 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:36.208 04:58:12 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:36.208 04:58:12 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:36.208 04:58:12 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:36.208 04:58:12 event -- scripts/common.sh@336 -- # IFS=.-: 00:03:36.208 04:58:12 event -- scripts/common.sh@336 -- # read -ra ver1 00:03:36.208 04:58:12 event -- scripts/common.sh@337 -- # IFS=.-: 00:03:36.208 04:58:12 event -- scripts/common.sh@337 -- # read -ra ver2 00:03:36.208 04:58:12 event -- scripts/common.sh@338 -- # local 'op=<' 00:03:36.208 04:58:12 event -- scripts/common.sh@340 -- # ver1_l=2 00:03:36.208 04:58:12 event -- scripts/common.sh@341 -- # ver2_l=1 00:03:36.208 04:58:12 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:36.208 04:58:12 event -- scripts/common.sh@344 -- # case "$op" in 00:03:36.208 04:58:12 event -- scripts/common.sh@345 -- # : 1 00:03:36.208 04:58:12 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:36.208 04:58:12 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:36.208 04:58:12 event -- scripts/common.sh@365 -- # decimal 1 00:03:36.208 04:58:12 event -- scripts/common.sh@353 -- # local d=1 00:03:36.208 04:58:12 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:36.208 04:58:12 event -- scripts/common.sh@355 -- # echo 1 00:03:36.208 04:58:12 event -- scripts/common.sh@365 -- # ver1[v]=1 00:03:36.208 04:58:12 event -- scripts/common.sh@366 -- # decimal 2 00:03:36.208 04:58:12 event -- scripts/common.sh@353 -- # local d=2 00:03:36.208 04:58:12 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:36.208 04:58:12 event -- scripts/common.sh@355 -- # echo 2 00:03:36.208 04:58:12 event -- scripts/common.sh@366 -- # ver2[v]=2 00:03:36.208 04:58:12 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:36.208 04:58:12 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:36.208 04:58:12 event -- scripts/common.sh@368 -- # return 0 00:03:36.208 04:58:12 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:36.208 04:58:12 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:36.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.208 --rc genhtml_branch_coverage=1 00:03:36.208 --rc genhtml_function_coverage=1 00:03:36.208 --rc genhtml_legend=1 00:03:36.208 --rc geninfo_all_blocks=1 00:03:36.208 --rc geninfo_unexecuted_blocks=1 00:03:36.208 00:03:36.208 ' 00:03:36.208 04:58:12 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:36.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.208 --rc genhtml_branch_coverage=1 00:03:36.208 --rc genhtml_function_coverage=1 00:03:36.208 --rc genhtml_legend=1 00:03:36.208 --rc geninfo_all_blocks=1 00:03:36.208 --rc geninfo_unexecuted_blocks=1 00:03:36.208 00:03:36.208 ' 00:03:36.208 04:58:12 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:36.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.208 --rc genhtml_branch_coverage=1 00:03:36.208 --rc genhtml_function_coverage=1 00:03:36.208 --rc genhtml_legend=1 00:03:36.208 --rc geninfo_all_blocks=1 00:03:36.208 --rc geninfo_unexecuted_blocks=1 00:03:36.209 00:03:36.209 ' 00:03:36.209 04:58:12 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:36.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.209 --rc genhtml_branch_coverage=1 00:03:36.209 --rc genhtml_function_coverage=1 00:03:36.209 --rc genhtml_legend=1 00:03:36.209 --rc geninfo_all_blocks=1 00:03:36.209 --rc geninfo_unexecuted_blocks=1 00:03:36.209 00:03:36.209 ' 00:03:36.209 04:58:12 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:03:36.209 04:58:12 event -- bdev/nbd_common.sh@6 -- # set -e 00:03:36.209 04:58:12 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:03:36.209 04:58:12 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:03:36.209 04:58:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:36.209 04:58:12 event -- common/autotest_common.sh@10 -- # set +x 00:03:36.209 ************************************ 00:03:36.209 START TEST event_perf 00:03:36.209 ************************************ 00:03:36.209 04:58:12 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:03:36.209 Running I/O for 1 seconds...[2024-12-09 04:58:12.846400] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:03:36.209 [2024-12-09 04:58:12.846471] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3391558 ] 00:03:36.467 [2024-12-09 04:58:12.913873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:36.467 [2024-12-09 04:58:12.958229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:36.467 [2024-12-09 04:58:12.958326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:36.467 [2024-12-09 04:58:12.958411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:03:36.467 [2024-12-09 04:58:12.958413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:37.399 Running I/O for 1 seconds... 00:03:37.399 lcore 0: 203140 00:03:37.399 lcore 1: 203139 00:03:37.399 lcore 2: 203140 00:03:37.399 lcore 3: 203140 00:03:37.399 done. 00:03:37.399 00:03:37.399 real 0m1.207s 00:03:37.399 user 0m4.130s 00:03:37.399 sys 0m0.075s 00:03:37.399 04:58:14 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:37.399 04:58:14 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:03:37.399 ************************************ 00:03:37.399 END TEST event_perf 00:03:37.399 ************************************ 00:03:37.656 04:58:14 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:03:37.656 04:58:14 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:03:37.656 04:58:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:37.656 04:58:14 event -- common/autotest_common.sh@10 -- # set +x 00:03:37.656 ************************************ 00:03:37.656 START TEST event_reactor 00:03:37.656 ************************************ 00:03:37.656 04:58:14 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:03:37.656 [2024-12-09 04:58:14.106683] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:03:37.656 [2024-12-09 04:58:14.106733] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3391782 ] 00:03:37.656 [2024-12-09 04:58:14.173147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:37.656 [2024-12-09 04:58:14.213909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:39.031 test_start 00:03:39.031 oneshot 00:03:39.031 tick 100 00:03:39.031 tick 100 00:03:39.031 tick 250 00:03:39.031 tick 100 00:03:39.031 tick 100 00:03:39.031 tick 100 00:03:39.031 tick 250 00:03:39.031 tick 500 00:03:39.031 tick 100 00:03:39.031 tick 100 00:03:39.031 tick 250 00:03:39.031 tick 100 00:03:39.031 tick 100 00:03:39.031 test_end 00:03:39.031 00:03:39.031 real 0m1.192s 00:03:39.031 user 0m1.126s 00:03:39.031 sys 0m0.062s 00:03:39.031 04:58:15 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:39.031 04:58:15 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:03:39.031 ************************************ 00:03:39.031 END TEST event_reactor 00:03:39.031 ************************************ 00:03:39.031 04:58:15 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:03:39.031 04:58:15 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:03:39.031 04:58:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.031 04:58:15 event -- common/autotest_common.sh@10 -- # set +x 00:03:39.031 ************************************ 00:03:39.031 START TEST event_reactor_perf 00:03:39.031 ************************************ 00:03:39.031 04:58:15 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:03:39.031 [2024-12-09 04:58:15.371216] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:03:39.031 [2024-12-09 04:58:15.371284] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3392034 ] 00:03:39.031 [2024-12-09 04:58:15.439563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:39.031 [2024-12-09 04:58:15.479256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:39.967 test_start 00:03:39.967 test_end 00:03:39.967 Performance: 502991 events per second 00:03:39.967 00:03:39.967 real 0m1.201s 00:03:39.967 user 0m1.130s 00:03:39.967 sys 0m0.066s 00:03:39.967 04:58:16 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:39.967 04:58:16 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:03:39.967 ************************************ 00:03:39.967 END TEST event_reactor_perf 00:03:39.967 ************************************ 00:03:39.967 04:58:16 event -- event/event.sh@49 -- # uname -s 00:03:39.967 04:58:16 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:03:39.967 04:58:16 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:03:39.967 04:58:16 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:39.967 04:58:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.967 04:58:16 event -- common/autotest_common.sh@10 -- # set +x 00:03:40.226 ************************************ 00:03:40.226 START TEST event_scheduler 00:03:40.226 ************************************ 00:03:40.226 04:58:16 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:03:40.226 * Looking for test storage... 
00:03:40.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:03:40.226 04:58:16 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:40.226 04:58:16 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:03:40.226 04:58:16 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:40.226 04:58:16 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:40.226 04:58:16 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:40.226 04:58:16 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:40.226 04:58:16 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:40.226 04:58:16 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:03:40.226 04:58:16 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:03:40.226 04:58:16 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:03:40.226 04:58:16 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:03:40.226 04:58:16 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:03:40.226 04:58:16 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:03:40.226 04:58:16 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:03:40.226 04:58:16 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:40.226 04:58:16 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:03:40.226 04:58:16 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:03:40.226 04:58:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:40.226 04:58:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:40.226 04:58:16 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:03:40.226 04:58:16 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:03:40.226 04:58:16 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:40.226 04:58:16 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:03:40.226 04:58:16 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:03:40.226 04:58:16 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:03:40.226 04:58:16 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:03:40.226 04:58:16 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:40.226 04:58:16 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:03:40.226 04:58:16 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:03:40.226 04:58:16 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:40.227 04:58:16 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:40.227 04:58:16 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:03:40.227 04:58:16 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:40.227 04:58:16 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:40.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.227 --rc genhtml_branch_coverage=1 00:03:40.227 --rc genhtml_function_coverage=1 00:03:40.227 --rc genhtml_legend=1 00:03:40.227 --rc geninfo_all_blocks=1 00:03:40.227 --rc geninfo_unexecuted_blocks=1 00:03:40.227 00:03:40.227 ' 00:03:40.227 04:58:16 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:40.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.227 --rc genhtml_branch_coverage=1 00:03:40.227 --rc genhtml_function_coverage=1 00:03:40.227 --rc genhtml_legend=1 00:03:40.227 --rc geninfo_all_blocks=1 00:03:40.227 --rc geninfo_unexecuted_blocks=1 00:03:40.227 00:03:40.227 ' 00:03:40.227 04:58:16 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:40.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.227 --rc genhtml_branch_coverage=1 00:03:40.227 --rc genhtml_function_coverage=1 00:03:40.227 --rc genhtml_legend=1 00:03:40.227 --rc geninfo_all_blocks=1 00:03:40.227 --rc geninfo_unexecuted_blocks=1 00:03:40.227 00:03:40.227 ' 00:03:40.227 04:58:16 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:40.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.227 --rc genhtml_branch_coverage=1 00:03:40.227 --rc genhtml_function_coverage=1 00:03:40.227 --rc genhtml_legend=1 00:03:40.227 --rc geninfo_all_blocks=1 00:03:40.227 --rc geninfo_unexecuted_blocks=1 00:03:40.227 00:03:40.227 ' 00:03:40.227 04:58:16 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:03:40.227 04:58:16 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3392321 00:03:40.227 04:58:16 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:03:40.227 04:58:16 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:03:40.227 04:58:16 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
3392321 00:03:40.227 04:58:16 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3392321 ']' 00:03:40.227 04:58:16 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:40.227 04:58:16 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:40.227 04:58:16 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:40.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:40.227 04:58:16 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:40.227 04:58:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:40.227 [2024-12-09 04:58:16.824380] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:03:40.227 [2024-12-09 04:58:16.824430] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3392321 ] 00:03:40.485 [2024-12-09 04:58:16.884314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:40.485 [2024-12-09 04:58:16.928454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:40.485 [2024-12-09 04:58:16.928542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:40.485 [2024-12-09 04:58:16.928625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:03:40.485 [2024-12-09 04:58:16.928627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:40.485 04:58:16 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:40.485 04:58:16 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:03:40.485 04:58:16 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:03:40.485 04:58:16 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:40.486 04:58:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:40.486 [2024-12-09 04:58:16.993202] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:03:40.486 [2024-12-09 04:58:16.993220] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:03:40.486 [2024-12-09 04:58:16.993230] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:03:40.486 [2024-12-09 04:58:16.993236] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:03:40.486 [2024-12-09 04:58:16.993241] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:03:40.486 04:58:16 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:40.486 04:58:16 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:03:40.486 04:58:16 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:40.486 04:58:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:40.486 [2024-12-09 04:58:17.069209] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
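The framework_set_scheduler / framework_start_init rpc_cmd calls traced above map onto plain rpc.py invocations. A minimal manual sketch, assuming a spdk_tgt launched with --wait-for-rpc and listening on the default /var/tmp/spdk.sock (paths relative to the SPDK repo root), would be:

  # switch the event framework to the dynamic scheduler while init is still pending
  ./scripts/rpc.py framework_set_scheduler dynamic
  # let the framework finish subsystem initialization
  ./scripts/rpc.py framework_start_init
  # confirm which scheduler is now active (method listed by rpc_get_methods above)
  ./scripts/rpc.py framework_get_scheduler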
00:03:40.486 04:58:17 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:40.486 04:58:17 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:03:40.486 04:58:17 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:40.486 04:58:17 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:40.486 04:58:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:40.486 ************************************ 00:03:40.486 START TEST scheduler_create_thread 00:03:40.486 ************************************ 00:03:40.486 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:03:40.486 04:58:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:03:40.486 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:40.486 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:40.486 2 00:03:40.486 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:40.486 04:58:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:03:40.486 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:40.486 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:40.486 3 00:03:40.486 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:40.486 04:58:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:03:40.486 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:40.486 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:40.744 4 00:03:40.744 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:40.744 04:58:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:03:40.744 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:40.744 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:40.744 5 00:03:40.744 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:40.744 04:58:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:03:40.744 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:40.744 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:40.744 6 00:03:40.744 04:58:17 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:40.744 04:58:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:03:40.744 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:40.744 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:40.744 7 00:03:40.744 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:40.744 04:58:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:03:40.744 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:40.744 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:40.744 8 00:03:40.744 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:40.744 04:58:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:03:40.744 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:40.744 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:40.744 9 00:03:40.744 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:40.745 04:58:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:03:40.745 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:40.745 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:40.745 10 00:03:40.745 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:40.745 04:58:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:03:40.745 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:40.745 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:40.745 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:40.745 04:58:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:03:40.745 04:58:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:03:40.745 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:40.745 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:41.310 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.310 04:58:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:03:41.310 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.310 04:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:42.682 04:58:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.682 04:58:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:03:42.682 04:58:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:03:42.682 04:58:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.682 04:58:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:43.614 04:58:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:43.614 00:03:43.614 real 0m3.102s 00:03:43.614 user 0m0.026s 00:03:43.614 sys 0m0.004s 00:03:43.614 04:58:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:43.614 04:58:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:43.614 ************************************ 00:03:43.614 END TEST scheduler_create_thread 00:03:43.614 ************************************ 00:03:43.614 04:58:20 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:03:43.614 04:58:20 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3392321 00:03:43.614 04:58:20 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3392321 ']' 00:03:43.614 04:58:20 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 3392321 00:03:43.614 04:58:20 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:03:43.614 04:58:20 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:43.614 04:58:20 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3392321 00:03:43.872 04:58:20 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:03:43.872 04:58:20 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:03:43.872 04:58:20 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3392321' 00:03:43.872 killing process with pid 3392321 00:03:43.872 04:58:20 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3392321 00:03:43.872 04:58:20 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3392321 00:03:44.130 [2024-12-09 04:58:20.584652] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
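The scheduler_create_thread subtest above drives the running scheduler app entirely through the scheduler_plugin RPC extension. A condensed sketch of the same sequence, assuming PYTHONPATH makes the scheduler_plugin module importable (the test harness arranges this) and treating the thread IDs 11 and 12 seen in the trace as illustrative values only:

  # create an active thread pinned to core 0 at 100% load; the call returns its thread id
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # drop an existing thread's active load to 50%
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  # create a short-lived thread, then delete it by id
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12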
00:03:44.388 00:03:44.388 real 0m4.190s 00:03:44.388 user 0m6.761s 00:03:44.388 sys 0m0.346s 00:03:44.388 04:58:20 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:44.388 04:58:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:44.388 ************************************ 00:03:44.388 END TEST event_scheduler 00:03:44.388 ************************************ 00:03:44.388 04:58:20 event -- event/event.sh@51 -- # modprobe -n nbd 00:03:44.388 04:58:20 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:03:44.388 04:58:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:44.388 04:58:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:44.388 04:58:20 event -- common/autotest_common.sh@10 -- # set +x 00:03:44.388 ************************************ 00:03:44.388 START TEST app_repeat 00:03:44.388 ************************************ 00:03:44.388 04:58:20 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:03:44.388 04:58:20 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:44.388 04:58:20 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:44.388 04:58:20 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:03:44.388 04:58:20 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:44.388 04:58:20 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:03:44.388 04:58:20 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:03:44.388 04:58:20 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:03:44.388 04:58:20 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3393059 00:03:44.388 04:58:20 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:03:44.388 04:58:20 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:03:44.388 04:58:20 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3393059' 00:03:44.388 Process app_repeat pid: 3393059 00:03:44.388 04:58:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:03:44.388 04:58:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:03:44.388 spdk_app_start Round 0 00:03:44.388 04:58:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3393059 /var/tmp/spdk-nbd.sock 00:03:44.388 04:58:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3393059 ']' 00:03:44.388 04:58:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:03:44.388 04:58:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:44.388 04:58:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:03:44.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:03:44.388 04:58:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:44.388 04:58:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:03:44.388 [2024-12-09 04:58:20.918386] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:03:44.388 [2024-12-09 04:58:20.918441] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3393059 ] 00:03:44.388 [2024-12-09 04:58:20.985907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:44.388 [2024-12-09 04:58:21.028067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:44.388 [2024-12-09 04:58:21.028070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:44.646 04:58:21 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:44.646 04:58:21 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:03:44.646 04:58:21 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:03:44.904 Malloc0 00:03:44.904 04:58:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:03:44.904 Malloc1 00:03:44.904 04:58:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:03:44.904 04:58:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:44.904 04:58:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:44.904 04:58:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:03:44.904 04:58:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:44.904 04:58:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:03:44.904 04:58:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:03:44.904 04:58:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:44.904 04:58:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:44.904 04:58:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:03:44.904 04:58:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:44.904 04:58:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:03:44.904 04:58:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:03:44.904 04:58:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:03:44.904 04:58:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:44.904 04:58:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:03:45.162 /dev/nbd0 00:03:45.162 04:58:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:03:45.162 04:58:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:03:45.162 04:58:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:03:45.162 04:58:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:03:45.162 04:58:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:03:45.162 04:58:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:03:45.162 04:58:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:03:45.162 04:58:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:03:45.162 04:58:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:03:45.162 04:58:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:03:45.162 04:58:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:03:45.162 1+0 records in 00:03:45.162 1+0 records out 00:03:45.162 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00185272 s, 2.2 MB/s 00:03:45.162 04:58:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:45.162 04:58:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:03:45.162 04:58:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:45.162 04:58:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:03:45.162 04:58:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:03:45.162 04:58:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:03:45.162 04:58:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:45.162 04:58:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:03:45.420 /dev/nbd1 00:03:45.420 04:58:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:03:45.420 04:58:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:03:45.420 04:58:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:03:45.420 04:58:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:03:45.420 04:58:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:03:45.420 04:58:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:03:45.420 04:58:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:03:45.420 04:58:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:03:45.420 04:58:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:03:45.420 04:58:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:03:45.420 04:58:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:03:45.420 1+0 records in 00:03:45.420 1+0 records out 00:03:45.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180609 s, 22.7 MB/s 00:03:45.420 04:58:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:45.420 04:58:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:03:45.420 04:58:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:45.420 04:58:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:03:45.420 04:58:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:03:45.420 04:58:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:03:45.420 04:58:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:45.420 04:58:22 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:03:45.420 04:58:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:45.420 04:58:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:03:45.679 { 00:03:45.679 "nbd_device": "/dev/nbd0", 00:03:45.679 "bdev_name": "Malloc0" 00:03:45.679 }, 00:03:45.679 { 00:03:45.679 "nbd_device": "/dev/nbd1", 00:03:45.679 "bdev_name": "Malloc1" 00:03:45.679 } 00:03:45.679 ]' 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:03:45.679 { 00:03:45.679 "nbd_device": "/dev/nbd0", 00:03:45.679 "bdev_name": "Malloc0" 00:03:45.679 }, 00:03:45.679 { 00:03:45.679 "nbd_device": "/dev/nbd1", 00:03:45.679 "bdev_name": "Malloc1" 00:03:45.679 } 00:03:45.679 ]' 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:03:45.679 /dev/nbd1' 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:03:45.679 /dev/nbd1' 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:03:45.679 256+0 records in 00:03:45.679 256+0 records out 00:03:45.679 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106717 s, 98.3 MB/s 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:03:45.679 256+0 records in 00:03:45.679 256+0 records out 00:03:45.679 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141484 s, 74.1 MB/s 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:03:45.679 256+0 records in 00:03:45.679 256+0 records out 00:03:45.679 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150272 s, 69.8 MB/s 00:03:45.679 04:58:22 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:03:45.679 04:58:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:03:45.939 04:58:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:03:45.939 04:58:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:03:45.939 04:58:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:03:45.939 04:58:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:03:45.939 04:58:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:03:45.939 04:58:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:03:45.939 04:58:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:03:45.939 04:58:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:03:45.939 04:58:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:03:45.939 04:58:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:03:46.197 04:58:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:03:46.197 04:58:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:03:46.197 04:58:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:03:46.197 04:58:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:03:46.197 04:58:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:03:46.198 04:58:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:03:46.198 04:58:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:03:46.198 04:58:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:03:46.198 04:58:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:03:46.198 04:58:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:46.198 04:58:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:03:46.456 04:58:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:03:46.456 04:58:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:03:46.456 04:58:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:03:46.456 04:58:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:03:46.456 04:58:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:03:46.456 04:58:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:03:46.456 04:58:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:03:46.456 04:58:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:03:46.456 04:58:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:03:46.456 04:58:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:03:46.456 04:58:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:03:46.456 04:58:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:03:46.456 04:58:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:03:46.715 04:58:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:03:46.715 [2024-12-09 04:58:23.351867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:46.974 [2024-12-09 04:58:23.389905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:46.974 [2024-12-09 04:58:23.389908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:46.974 [2024-12-09 04:58:23.430929] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:03:46.974 [2024-12-09 04:58:23.430973] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:03:50.258 04:58:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:03:50.258 04:58:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:03:50.258 spdk_app_start Round 1 00:03:50.258 04:58:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3393059 /var/tmp/spdk-nbd.sock 00:03:50.258 04:58:26 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3393059 ']' 00:03:50.258 04:58:26 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:03:50.258 04:58:26 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:50.258 04:58:26 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:03:50.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
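Each round of app_repeat performs the same nbd round-trip that the dd/cmp output above records: back Malloc bdevs with /dev/nbd devices, push 1 MiB of random data through the block device, and compare it back. A condensed sketch of one device's cycle, with the temp-file path shortened for readability (the log uses a file under spdk/test/event/):

  rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create 64 4096                        # 64 MiB malloc bdev with 4 KiB blocks -> Malloc0
  $rpc nbd_start_disk Malloc0 /dev/nbd0                  # expose it as an nbd block device
  dd if=/dev/urandom of=randtest bs=4096 count=256       # 1 MiB reference pattern
  dd if=randtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M randtest /dev/nbd0                        # verify what the bdev stored
  rm randtest
  $rpc nbd_stop_disk /dev/nbd0
  $rpc spdk_kill_instance SIGTERM                        # end the round; app_repeat reinitializes for the next one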
00:03:50.258 04:58:26 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:50.258 04:58:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:03:50.258 04:58:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:50.258 04:58:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:03:50.258 04:58:26 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:03:50.258 Malloc0 00:03:50.258 04:58:26 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:03:50.258 Malloc1 00:03:50.258 04:58:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:03:50.258 04:58:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:50.258 04:58:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:50.258 04:58:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:03:50.258 04:58:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:50.258 04:58:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:03:50.258 04:58:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:03:50.258 04:58:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:50.258 04:58:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:50.258 04:58:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:03:50.258 04:58:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:50.258 04:58:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:03:50.258 04:58:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:03:50.258 04:58:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:03:50.258 04:58:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:50.258 04:58:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:03:50.515 /dev/nbd0 00:03:50.515 04:58:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:03:50.515 04:58:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:03:50.515 04:58:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:03:50.515 04:58:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:03:50.515 04:58:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:03:50.515 04:58:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:03:50.515 04:58:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:03:50.515 04:58:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:03:50.516 04:58:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:03:50.516 04:58:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:03:50.516 04:58:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:03:50.516 1+0 records in 00:03:50.516 1+0 records out 00:03:50.516 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182121 s, 22.5 MB/s 00:03:50.516 04:58:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:50.516 04:58:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:03:50.516 04:58:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:50.516 04:58:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:03:50.516 04:58:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:03:50.516 04:58:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:03:50.516 04:58:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:50.516 04:58:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:03:50.773 /dev/nbd1 00:03:50.773 04:58:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:03:50.773 04:58:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:03:50.773 04:58:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:03:50.773 04:58:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:03:50.773 04:58:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:03:50.773 04:58:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:03:50.773 04:58:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:03:50.773 04:58:27 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:03:50.773 04:58:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:03:50.773 04:58:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:03:50.773 04:58:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:03:50.773 1+0 records in 00:03:50.773 1+0 records out 00:03:50.773 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188781 s, 21.7 MB/s 00:03:50.773 04:58:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:50.773 04:58:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:03:50.773 04:58:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:50.773 04:58:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:03:50.773 04:58:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:03:50.773 04:58:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:03:50.773 04:58:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:50.773 04:58:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:03:50.773 04:58:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:50.773 04:58:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:03:51.030 04:58:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:03:51.030 { 00:03:51.030 "nbd_device": "/dev/nbd0", 00:03:51.030 "bdev_name": "Malloc0" 00:03:51.030 }, 00:03:51.030 { 00:03:51.030 "nbd_device": "/dev/nbd1", 00:03:51.030 "bdev_name": "Malloc1" 00:03:51.030 } 00:03:51.030 ]' 00:03:51.030 04:58:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:03:51.030 { 00:03:51.030 "nbd_device": "/dev/nbd0", 00:03:51.030 "bdev_name": "Malloc0" 00:03:51.030 }, 00:03:51.030 { 00:03:51.030 "nbd_device": "/dev/nbd1", 00:03:51.030 "bdev_name": "Malloc1" 00:03:51.030 } 00:03:51.030 ]' 00:03:51.030 04:58:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:03:51.030 04:58:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:03:51.030 /dev/nbd1' 00:03:51.030 04:58:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:03:51.030 04:58:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:03:51.030 /dev/nbd1' 00:03:51.030 04:58:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:03:51.030 04:58:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:03:51.030 04:58:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:03:51.030 04:58:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:03:51.030 04:58:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:03:51.030 04:58:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:51.030 04:58:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:03:51.031 04:58:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:03:51.031 04:58:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:51.031 04:58:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:03:51.031 04:58:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:03:51.031 256+0 records in 00:03:51.031 256+0 records out 00:03:51.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106296 s, 98.6 MB/s 00:03:51.031 04:58:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:03:51.031 04:58:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:03:51.031 256+0 records in 00:03:51.031 256+0 records out 00:03:51.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139128 s, 75.4 MB/s 00:03:51.031 04:58:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:03:51.031 04:58:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:03:51.031 256+0 records in 00:03:51.031 256+0 records out 00:03:51.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155998 s, 67.2 MB/s 00:03:51.031 04:58:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:03:51.031 04:58:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:51.031 04:58:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:03:51.031 04:58:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:03:51.031 04:58:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:51.031 04:58:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:03:51.031 04:58:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:03:51.031 04:58:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:03:51.031 04:58:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:03:51.031 04:58:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:03:51.031 04:58:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:03:51.031 04:58:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:51.031 04:58:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:03:51.031 04:58:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:51.031 04:58:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:51.031 04:58:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:03:51.031 04:58:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:03:51.031 04:58:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:03:51.031 04:58:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:03:51.289 04:58:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:03:51.289 04:58:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:03:51.289 04:58:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:03:51.289 04:58:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:03:51.289 04:58:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:03:51.289 04:58:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:03:51.289 04:58:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:03:51.289 04:58:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:03:51.289 04:58:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:03:51.289 04:58:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:03:51.547 04:58:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:03:51.547 04:58:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:03:51.547 04:58:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:03:51.547 04:58:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:03:51.547 04:58:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:03:51.547 04:58:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:03:51.547 04:58:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:03:51.547 04:58:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:03:51.547 04:58:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:03:51.547 04:58:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:03:51.547 04:58:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:03:51.547 04:58:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:03:51.547 04:58:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:03:51.547 04:58:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:03:51.806 04:58:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:03:51.806 04:58:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:03:51.806 04:58:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:03:51.806 04:58:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:03:51.806 04:58:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:03:51.806 04:58:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:03:51.806 04:58:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:03:51.806 04:58:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:03:51.806 04:58:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:03:51.806 04:58:28 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:03:51.806 04:58:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:03:52.064 [2024-12-09 04:58:28.593924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:52.064 [2024-12-09 04:58:28.630947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:52.064 [2024-12-09 04:58:28.630950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:52.064 [2024-12-09 04:58:28.672714] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:03:52.064 [2024-12-09 04:58:28.672755] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:03:55.348 04:58:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:03:55.348 04:58:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:03:55.348 spdk_app_start Round 2 00:03:55.348 04:58:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3393059 /var/tmp/spdk-nbd.sock 00:03:55.348 04:58:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3393059 ']' 00:03:55.348 04:58:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:03:55.348 04:58:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:55.348 04:58:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:03:55.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
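Before any data is written, every nbd attach in this log is gated by the waitfornbd helper: poll /proc/partitions until the device node appears, then read a single 4 KiB block with O_DIRECT to confirm I/O actually completes. A sketch of that probe, reconstructed from the trace (the retry bound matches the i <= 20 loops above; the sleep between attempts is an assumption, since the log only shows the successful pass):

  waitfornbd() {
      local nbd_name=$1 i size
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break   # device registered with the kernel?
          sleep 0.1
      done
      # one direct-I/O read proves the nbd connection actually serves requests
      dd if=/dev/"$nbd_name" of=nbdtest bs=4096 count=1 iflag=direct
      size=$(stat -c %s nbdtest)
      rm -f nbdtest
      [ "$size" != 0 ]
  }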
00:03:55.348 04:58:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:55.348 04:58:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:03:55.348 04:58:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:55.348 04:58:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:03:55.348 04:58:31 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:03:55.348 Malloc0 00:03:55.348 04:58:31 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:03:55.608 Malloc1 00:03:55.608 04:58:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:03:55.608 04:58:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:55.608 04:58:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:55.608 04:58:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:03:55.608 04:58:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:55.608 04:58:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:03:55.608 04:58:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:03:55.608 04:58:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:55.608 04:58:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:55.608 04:58:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:03:55.608 04:58:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:55.608 04:58:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:03:55.608 04:58:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:03:55.608 04:58:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:03:55.608 04:58:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:55.608 04:58:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:03:55.608 /dev/nbd0 00:03:55.867 04:58:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:03:55.867 04:58:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:03:55.867 04:58:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:03:55.867 04:58:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:03:55.867 04:58:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:03:55.867 04:58:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:03:55.867 04:58:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:03:55.867 04:58:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:03:55.867 04:58:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:03:55.867 04:58:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:03:55.867 04:58:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:03:55.867 1+0 records in 00:03:55.867 1+0 records out 00:03:55.867 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182897 s, 22.4 MB/s 00:03:55.867 04:58:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:55.867 04:58:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:03:55.867 04:58:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:55.867 04:58:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:03:55.867 04:58:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:03:55.867 04:58:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:03:55.867 04:58:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:55.867 04:58:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:03:55.867 /dev/nbd1 00:03:55.867 04:58:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:03:55.867 04:58:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:03:55.867 04:58:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:03:56.126 04:58:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:03:56.126 04:58:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:03:56.126 04:58:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:03:56.126 04:58:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:03:56.126 04:58:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:03:56.126 04:58:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:03:56.126 04:58:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:03:56.126 04:58:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:03:56.126 1+0 records in 00:03:56.126 1+0 records out 00:03:56.126 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213844 s, 19.2 MB/s 00:03:56.126 04:58:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:56.126 04:58:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:03:56.126 04:58:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:56.126 04:58:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:03:56.126 04:58:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:03:56.126 04:58:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:03:56.126 04:58:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:56.126 04:58:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:03:56.126 04:58:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:56.126 04:58:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:03:56.126 04:58:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:03:56.126 { 00:03:56.126 "nbd_device": "/dev/nbd0", 00:03:56.126 "bdev_name": "Malloc0" 00:03:56.126 }, 00:03:56.126 { 00:03:56.126 "nbd_device": "/dev/nbd1", 00:03:56.126 "bdev_name": "Malloc1" 00:03:56.126 } 00:03:56.126 ]' 00:03:56.126 04:58:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:03:56.126 { 00:03:56.126 "nbd_device": "/dev/nbd0", 00:03:56.126 "bdev_name": "Malloc0" 00:03:56.126 }, 00:03:56.126 { 00:03:56.126 "nbd_device": "/dev/nbd1", 00:03:56.126 "bdev_name": "Malloc1" 00:03:56.126 } 00:03:56.126 ]' 00:03:56.126 04:58:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:03:56.126 04:58:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:03:56.126 /dev/nbd1' 00:03:56.126 04:58:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:03:56.126 /dev/nbd1' 00:03:56.126 04:58:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:03:56.126 04:58:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:03:56.126 04:58:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:03:56.126 04:58:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:03:56.126 04:58:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:03:56.126 04:58:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:03:56.384 256+0 records in 00:03:56.384 256+0 records out 00:03:56.384 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107737 s, 97.3 MB/s 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:03:56.384 256+0 records in 00:03:56.384 256+0 records out 00:03:56.384 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142255 s, 73.7 MB/s 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:03:56.384 256+0 records in 00:03:56.384 256+0 records out 00:03:56.384 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0154664 s, 67.8 MB/s 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:03:56.384 04:58:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:03:56.643 04:58:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:03:56.643 04:58:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:03:56.643 04:58:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:03:56.643 04:58:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:03:56.643 04:58:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:03:56.643 04:58:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:03:56.643 04:58:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:03:56.643 04:58:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:03:56.643 04:58:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:03:56.643 04:58:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:03:56.643 04:58:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:03:56.643 04:58:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:03:56.643 04:58:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:03:56.643 04:58:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:03:56.643 04:58:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:03:56.643 04:58:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:03:56.643 04:58:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:03:56.643 04:58:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:03:56.643 04:58:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:03:56.643 04:58:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:03:56.643 04:58:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:03:56.902 04:58:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:03:56.902 04:58:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:03:56.902 04:58:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:03:56.902 04:58:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:03:56.902 04:58:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:03:56.902 04:58:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:03:56.902 04:58:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:03:56.902 04:58:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:03:56.902 04:58:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:03:56.902 04:58:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:03:56.902 04:58:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:03:56.902 04:58:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:03:56.902 04:58:33 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:03:57.161 04:58:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:03:57.420 [2024-12-09 04:58:33.899755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:57.420 [2024-12-09 04:58:33.939307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:57.420 [2024-12-09 04:58:33.939311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.420 [2024-12-09 04:58:33.980820] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:03:57.420 [2024-12-09 04:58:33.980856] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:00.837 04:58:36 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3393059 /var/tmp/spdk-nbd.sock 00:04:00.837 04:58:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3393059 ']' 00:04:00.837 04:58:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:00.837 04:58:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:00.837 04:58:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:00.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
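The event.sh line tags in this trace (@23-@35 for the loop, @38-@39 for the final wait and kill) outline the overall repeat structure: three rounds of nbd verification, each ended with spdk_kill_instance SIGTERM so the app_repeat binary reinitializes into the next round, then a fourth round that is simply waited on and killed. A rough sketch under those assumptions, with the verification body elided and variable names taken from the trace:

  for i in {0..2}; do
      echo "spdk_app_start Round $i"
      waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
      # ... create Malloc0/Malloc1, attach /dev/nbd0 and /dev/nbd1, dd + cmp 1 MiB on each ...
      ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
      sleep 3                                   # give the app time to reinitialize
  done
  waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock    # Round 3 comes up
  killprocess "$repeat_pid"                             # final teardown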
00:04:00.837 04:58:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:00.837 04:58:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:00.837 04:58:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:00.837 04:58:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:00.837 04:58:36 event.app_repeat -- event/event.sh@39 -- # killprocess 3393059 00:04:00.837 04:58:36 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3393059 ']' 00:04:00.837 04:58:36 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3393059 00:04:00.837 04:58:36 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:00.837 04:58:36 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:00.837 04:58:36 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3393059 00:04:00.837 04:58:36 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:00.837 04:58:36 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:00.837 04:58:36 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3393059' 00:04:00.837 killing process with pid 3393059 00:04:00.837 04:58:36 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3393059 00:04:00.837 04:58:36 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3393059 00:04:00.837 spdk_app_start is called in Round 0. 00:04:00.837 Shutdown signal received, stop current app iteration 00:04:00.837 Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 reinitialization... 00:04:00.837 spdk_app_start is called in Round 1. 00:04:00.837 Shutdown signal received, stop current app iteration 00:04:00.837 Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 reinitialization... 00:04:00.837 spdk_app_start is called in Round 2. 00:04:00.837 Shutdown signal received, stop current app iteration 00:04:00.838 Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 reinitialization... 00:04:00.838 spdk_app_start is called in Round 3. 
00:04:00.838 Shutdown signal received, stop current app iteration 00:04:00.838 04:58:37 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:00.838 04:58:37 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:00.838 00:04:00.838 real 0m16.242s 00:04:00.838 user 0m35.639s 00:04:00.838 sys 0m2.466s 00:04:00.838 04:58:37 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.838 04:58:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:00.838 ************************************ 00:04:00.838 END TEST app_repeat 00:04:00.838 ************************************ 00:04:00.838 04:58:37 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:00.838 04:58:37 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:00.838 04:58:37 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.838 04:58:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.838 04:58:37 event -- common/autotest_common.sh@10 -- # set +x 00:04:00.838 ************************************ 00:04:00.838 START TEST cpu_locks 00:04:00.838 ************************************ 00:04:00.838 04:58:37 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:00.838 * Looking for test storage... 00:04:00.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:00.838 04:58:37 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:00.838 04:58:37 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:00.838 04:58:37 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:00.838 04:58:37 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:00.838 04:58:37 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:00.838 04:58:37 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:00.838 04:58:37 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:00.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.838 --rc genhtml_branch_coverage=1 00:04:00.838 --rc genhtml_function_coverage=1 00:04:00.838 --rc genhtml_legend=1 00:04:00.838 --rc geninfo_all_blocks=1 00:04:00.838 --rc geninfo_unexecuted_blocks=1 00:04:00.838 00:04:00.838 ' 00:04:00.838 04:58:37 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:00.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.838 --rc genhtml_branch_coverage=1 00:04:00.838 --rc genhtml_function_coverage=1 00:04:00.838 --rc genhtml_legend=1 00:04:00.838 --rc geninfo_all_blocks=1 00:04:00.838 --rc geninfo_unexecuted_blocks=1 00:04:00.838 00:04:00.838 ' 00:04:00.838 04:58:37 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:00.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.838 --rc genhtml_branch_coverage=1 00:04:00.838 --rc genhtml_function_coverage=1 00:04:00.838 --rc genhtml_legend=1 00:04:00.838 --rc geninfo_all_blocks=1 00:04:00.838 --rc geninfo_unexecuted_blocks=1 00:04:00.838 00:04:00.838 ' 00:04:00.838 04:58:37 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:00.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.838 --rc genhtml_branch_coverage=1 00:04:00.838 --rc genhtml_function_coverage=1 00:04:00.838 --rc genhtml_legend=1 00:04:00.838 --rc geninfo_all_blocks=1 00:04:00.838 --rc geninfo_unexecuted_blocks=1 00:04:00.838 00:04:00.838 ' 00:04:00.838 04:58:37 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:00.838 04:58:37 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:00.838 04:58:37 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:00.838 04:58:37 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:00.838 04:58:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.838 04:58:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.838 04:58:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:00.838 ************************************ 
00:04:00.838 START TEST default_locks 00:04:00.838 ************************************ 00:04:00.838 04:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:00.838 04:58:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3396091 00:04:00.838 04:58:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3396091 00:04:00.838 04:58:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:00.838 04:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3396091 ']' 00:04:00.838 04:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:00.838 04:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:00.838 04:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:00.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:00.838 04:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:00.838 04:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:00.838 [2024-12-09 04:58:37.458785] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:04:00.838 [2024-12-09 04:58:37.458829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3396091 ] 00:04:01.098 [2024-12-09 04:58:37.524932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.098 [2024-12-09 04:58:37.567623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.356 04:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:01.356 04:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:01.356 04:58:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3396091 00:04:01.356 04:58:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3396091 00:04:01.356 04:58:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:01.615 lslocks: write error 00:04:01.615 04:58:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3396091 00:04:01.615 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3396091 ']' 00:04:01.615 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3396091 00:04:01.615 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:01.615 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:01.615 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3396091 00:04:01.615 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:01.615 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:01.615 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 3396091' 00:04:01.615 killing process with pid 3396091 00:04:01.615 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3396091 00:04:01.615 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3396091 00:04:01.875 04:58:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3396091 00:04:01.875 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:01.875 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3396091 00:04:01.875 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:01.875 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:01.875 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:01.875 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:01.875 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3396091 00:04:01.875 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3396091 ']' 00:04:01.875 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.875 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:01.875 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:01.875 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:01.875 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:01.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3396091) - No such process 00:04:01.875 ERROR: process (pid: 3396091) is no longer running 00:04:01.875 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:01.875 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:01.875 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:01.875 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:01.875 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:01.875 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:01.875 04:58:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:01.875 04:58:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:01.875 04:58:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:01.875 04:58:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:01.875 00:04:01.875 real 0m1.043s 00:04:01.875 user 0m0.995s 00:04:01.875 sys 0m0.467s 00:04:01.875 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.875 04:58:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:01.875 ************************************ 00:04:01.875 END TEST default_locks 00:04:01.875 ************************************ 00:04:01.875 04:58:38 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:01.875 04:58:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.875 04:58:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.875 04:58:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:01.875 ************************************ 00:04:01.875 START TEST default_locks_via_rpc 00:04:01.875 ************************************ 00:04:01.875 04:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:01.875 04:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3396315 00:04:01.875 04:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3396315 00:04:01.875 04:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:01.875 04:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3396315 ']' 00:04:01.875 04:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.875 04:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:01.875 04:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
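The default_locks case that just finished is built around the suite's locks_exist helper: spdk_tgt started with -m 0x1 takes a file lock for core 0 (one of the spdk_cpu_lock files under /var/tmp), lslocks on the target's PID is expected to list it, and the stray 'lslocks: write error' is only lslocks reporting that grep -q closed the pipe early. A rough equivalent of that check, with the lock-file naming taken from the log:

  # True only if the process holds an SPDK per-core lock file.
  locks_exist() {
      lslocks -p "$1" | grep -q spdk_cpu_lock
  }

  ./build/bin/spdk_tgt -m 0x1 &   # claims core 0
  pid=$!
  # (the real test waits for /var/tmp/spdk.sock via waitforlisten before checking)
  locks_exist "$pid" && echo "core lock present for pid $pid"

After the target is killed, the same test wraps waitforlisten in NOT, so the 'No such process' error and the non-zero return seen above are the expected result.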
00:04:01.875 04:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:01.875 04:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.134 [2024-12-09 04:58:38.568429] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:04:02.134 [2024-12-09 04:58:38.568473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3396315 ] 00:04:02.134 [2024-12-09 04:58:38.632514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.134 [2024-12-09 04:58:38.674711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.393 04:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:02.394 04:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:02.394 04:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:02.394 04:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.394 04:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.394 04:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.394 04:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:02.394 04:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:02.394 04:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:02.394 04:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:02.394 04:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:02.394 04:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.394 04:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.394 04:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.394 04:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3396315 00:04:02.394 04:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3396315 00:04:02.394 04:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:02.652 04:58:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3396315 00:04:02.652 04:58:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3396315 ']' 00:04:02.652 04:58:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3396315 00:04:02.652 04:58:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:02.652 04:58:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:02.652 04:58:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3396315 00:04:02.911 04:58:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:02.911 
04:58:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:02.911 04:58:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3396315' 00:04:02.911 killing process with pid 3396315 00:04:02.911 04:58:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3396315 00:04:02.911 04:58:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3396315 00:04:03.170 00:04:03.170 real 0m1.131s 00:04:03.170 user 0m1.104s 00:04:03.170 sys 0m0.490s 00:04:03.170 04:58:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.170 04:58:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.170 ************************************ 00:04:03.170 END TEST default_locks_via_rpc 00:04:03.170 ************************************ 00:04:03.170 04:58:39 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:03.170 04:58:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.170 04:58:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.170 04:58:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:03.170 ************************************ 00:04:03.170 START TEST non_locking_app_on_locked_coremask 00:04:03.170 ************************************ 00:04:03.170 04:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:03.170 04:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3396572 00:04:03.170 04:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3396572 /var/tmp/spdk.sock 00:04:03.170 04:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:03.170 04:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3396572 ']' 00:04:03.170 04:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.170 04:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:03.170 04:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.170 04:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:03.170 04:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:03.170 [2024-12-09 04:58:39.769528] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:04:03.170 [2024-12-09 04:58:39.769572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3396572 ] 00:04:03.429 [2024-12-09 04:58:39.831486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.429 [2024-12-09 04:58:39.874652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.688 04:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:03.688 04:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:03.688 04:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3396598 00:04:03.688 04:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3396598 /var/tmp/spdk2.sock 00:04:03.688 04:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:03.688 04:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3396598 ']' 00:04:03.688 04:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:03.688 04:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:03.688 04:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:03.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:03.688 04:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:03.688 04:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:03.688 [2024-12-09 04:58:40.154328] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:04:03.688 [2024-12-09 04:58:40.154379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3396598 ] 00:04:03.688 [2024-12-09 04:58:40.250694] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
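The 'CPU core locks deactivated.' notice above comes from the second spdk_tgt of non_locking_app_on_locked_coremask: it reuses core mask 0x1 but passes --disable-cpumask-locks and its own RPC socket, so it never tries to claim core 0 and can run next to the instance that already holds the lock. Condensed, and assuming the commands run from the SPDK tree:

  # First instance claims core 0 and holds its spdk_cpu_lock file.
  ./build/bin/spdk_tgt -m 0x1 &
  pid1=$!
  # Second instance shares core 0 but skips the lock, so startup succeeds.
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!
  # Only the first PID is expected to show up in lslocks with spdk_cpu_lock.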
00:04:03.688 [2024-12-09 04:58:40.250724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.948 [2024-12-09 04:58:40.344646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.516 04:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:04.516 04:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:04.516 04:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3396572 00:04:04.516 04:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3396572 00:04:04.516 04:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:04.776 lslocks: write error 00:04:04.776 04:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3396572 00:04:04.776 04:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3396572 ']' 00:04:04.776 04:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3396572 00:04:04.776 04:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:04.776 04:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:04.776 04:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3396572 00:04:04.776 04:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:04.776 04:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:04.776 04:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3396572' 00:04:04.776 killing process with pid 3396572 00:04:04.776 04:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3396572 00:04:04.776 04:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3396572 00:04:05.346 04:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3396598 00:04:05.346 04:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3396598 ']' 00:04:05.346 04:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3396598 00:04:05.346 04:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:05.346 04:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:05.346 04:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3396598 00:04:05.346 04:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:05.346 04:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:05.346 04:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3396598' 00:04:05.346 
killing process with pid 3396598 00:04:05.346 04:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3396598 00:04:05.346 04:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3396598 00:04:05.915 00:04:05.915 real 0m2.599s 00:04:05.915 user 0m2.746s 00:04:05.915 sys 0m0.813s 00:04:05.915 04:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.915 04:58:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:05.915 ************************************ 00:04:05.915 END TEST non_locking_app_on_locked_coremask 00:04:05.915 ************************************ 00:04:05.915 04:58:42 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:05.915 04:58:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.915 04:58:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.915 04:58:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:05.915 ************************************ 00:04:05.915 START TEST locking_app_on_unlocked_coremask 00:04:05.915 ************************************ 00:04:05.915 04:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:05.915 04:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3397073 00:04:05.915 04:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3397073 /var/tmp/spdk.sock 00:04:05.915 04:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:05.915 04:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3397073 ']' 00:04:05.915 04:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.915 04:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:05.916 04:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:05.916 04:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:05.916 04:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:05.916 [2024-12-09 04:58:42.436098] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:04:05.916 [2024-12-09 04:58:42.436142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3397073 ] 00:04:05.916 [2024-12-09 04:58:42.500369] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
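locking_app_on_unlocked_coremask flips the arrangement: the first spdk_tgt is the one started with --disable-cpumask-locks (hence the notice just above), so core 0 stays unclaimed and the second, normally-locking instance is expected to take the lock; locks_exist is then run against the second PID. Sketched under the same assumptions as before:

  # Unlocked instance first: no spdk_cpu_lock file is created for it.
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
  pid_unlocked=$!
  # Locking instance second: core 0 is free, so it starts and takes the lock.
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
  pid_locked=$!
  lslocks -p "$pid_locked" | grep -q spdk_cpu_lock && echo "lock held by second instance"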
00:04:05.916 [2024-12-09 04:58:42.500398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.916 [2024-12-09 04:58:42.538148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.175 04:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:06.175 04:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:06.175 04:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3397076 00:04:06.175 04:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3397076 /var/tmp/spdk2.sock 00:04:06.175 04:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:06.175 04:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3397076 ']' 00:04:06.175 04:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:06.175 04:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:06.175 04:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:06.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:06.175 04:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:06.175 04:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:06.175 [2024-12-09 04:58:42.795912] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:04:06.175 [2024-12-09 04:58:42.795957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3397076 ] 00:04:06.435 [2024-12-09 04:58:42.890957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.435 [2024-12-09 04:58:42.974185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.004 04:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:07.004 04:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:07.004 04:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3397076 00:04:07.004 04:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3397076 00:04:07.004 04:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:07.943 lslocks: write error 00:04:07.943 04:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3397073 00:04:07.943 04:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3397073 ']' 00:04:07.943 04:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3397073 00:04:07.943 04:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:07.943 04:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:07.943 04:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3397073 00:04:07.943 04:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:07.943 04:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:07.943 04:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3397073' 00:04:07.943 killing process with pid 3397073 00:04:07.943 04:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3397073 00:04:07.943 04:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3397073 00:04:08.512 04:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3397076 00:04:08.512 04:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3397076 ']' 00:04:08.512 04:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3397076 00:04:08.512 04:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:08.512 04:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:08.512 04:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3397076 00:04:08.512 04:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:08.512 04:58:45 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:08.512 04:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3397076' 00:04:08.512 killing process with pid 3397076 00:04:08.512 04:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3397076 00:04:08.512 04:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3397076 00:04:08.771 00:04:08.771 real 0m3.007s 00:04:08.771 user 0m3.148s 00:04:08.771 sys 0m0.992s 00:04:08.771 04:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.771 04:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:08.771 ************************************ 00:04:08.771 END TEST locking_app_on_unlocked_coremask 00:04:08.771 ************************************ 00:04:09.030 04:58:45 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:09.030 04:58:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.030 04:58:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.030 04:58:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:09.030 ************************************ 00:04:09.030 START TEST locking_app_on_locked_coremask 00:04:09.030 ************************************ 00:04:09.030 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:09.030 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3397573 00:04:09.030 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3397573 /var/tmp/spdk.sock 00:04:09.030 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:09.030 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3397573 ']' 00:04:09.030 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.030 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:09.030 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:09.030 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:09.030 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:09.030 [2024-12-09 04:58:45.516472] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:04:09.030 [2024-12-09 04:58:45.516516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3397573 ] 00:04:09.030 [2024-12-09 04:58:45.579888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.030 [2024-12-09 04:58:45.622187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.290 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:09.290 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:09.290 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3397738 00:04:09.290 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:09.290 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3397738 /var/tmp/spdk2.sock 00:04:09.290 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:09.290 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3397738 /var/tmp/spdk2.sock 00:04:09.290 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:09.290 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:09.290 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:09.290 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:09.290 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3397738 /var/tmp/spdk2.sock 00:04:09.290 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3397738 ']' 00:04:09.290 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:09.290 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:09.290 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:09.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:09.290 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:09.290 04:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:09.290 [2024-12-09 04:58:45.880670] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:04:09.290 [2024-12-09 04:58:45.880719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3397738 ] 00:04:09.548 [2024-12-09 04:58:45.973293] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3397573 has claimed it. 00:04:09.548 [2024-12-09 04:58:45.973335] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:10.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3397738) - No such process 00:04:10.115 ERROR: process (pid: 3397738) is no longer running 00:04:10.115 04:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:10.115 04:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:10.115 04:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:10.115 04:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:10.115 04:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:10.116 04:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:10.116 04:58:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3397573 00:04:10.116 04:58:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3397573 00:04:10.116 04:58:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:10.374 lslocks: write error 00:04:10.374 04:58:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3397573 00:04:10.374 04:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3397573 ']' 00:04:10.374 04:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3397573 00:04:10.374 04:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:10.632 04:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:10.632 04:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3397573 00:04:10.632 04:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:10.632 04:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:10.632 04:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3397573' 00:04:10.632 killing process with pid 3397573 00:04:10.632 04:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3397573 00:04:10.632 04:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3397573 00:04:10.891 00:04:10.891 real 0m1.948s 00:04:10.891 user 0m2.092s 00:04:10.891 sys 0m0.641s 00:04:10.891 04:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
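The error pair logged just above ('Cannot create lock on core 0, probably process 3397573 has claimed it' / 'Unable to acquire lock on assigned core mask - exiting') is the expected outcome of locking_app_on_locked_coremask: a second spdk_tgt on the same core without --disable-cpumask-locks must refuse to start, and the NOT-wrapped waitforlisten treats its failure as a pass. Roughly:

  ./build/bin/spdk_tgt -m 0x1 &          # first instance owns the core-0 lock
  pid1=$!
  # Second locking instance on the same core is expected to exit non-zero;
  # timeout is only a guard for the (unexpected) case where it keeps running.
  if timeout 30 ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
      echo "second instance unexpectedly started" >&2
  fi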
00:04:10.891 04:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:10.891 ************************************ 00:04:10.891 END TEST locking_app_on_locked_coremask 00:04:10.891 ************************************ 00:04:10.891 04:58:47 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:10.891 04:58:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.891 04:58:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.891 04:58:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:10.891 ************************************ 00:04:10.891 START TEST locking_overlapped_coremask 00:04:10.891 ************************************ 00:04:10.891 04:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:10.891 04:58:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3398058 00:04:10.891 04:58:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3398058 /var/tmp/spdk.sock 00:04:10.891 04:58:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:10.891 04:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3398058 ']' 00:04:10.891 04:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.891 04:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:10.891 04:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:10.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:10.891 04:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:10.891 04:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:10.891 [2024-12-09 04:58:47.532529] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:04:10.891 [2024-12-09 04:58:47.532568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3398058 ] 00:04:11.150 [2024-12-09 04:58:47.596863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:11.150 [2024-12-09 04:58:47.641845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:11.150 [2024-12-09 04:58:47.641928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:11.150 [2024-12-09 04:58:47.641929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.409 04:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:11.409 04:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:11.409 04:58:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3398068 00:04:11.409 04:58:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3398068 /var/tmp/spdk2.sock 00:04:11.409 04:58:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:11.409 04:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:11.409 04:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3398068 /var/tmp/spdk2.sock 00:04:11.409 04:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:11.409 04:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:11.409 04:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:11.409 04:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:11.409 04:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3398068 /var/tmp/spdk2.sock 00:04:11.409 04:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3398068 ']' 00:04:11.409 04:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:11.409 04:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:11.409 04:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:11.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:11.409 04:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:11.409 04:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:11.409 [2024-12-09 04:58:47.904401] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:04:11.409 [2024-12-09 04:58:47.904446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3398068 ] 00:04:11.409 [2024-12-09 04:58:48.004126] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3398058 has claimed it. 00:04:11.409 [2024-12-09 04:58:48.004170] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:11.976 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3398068) - No such process 00:04:11.976 ERROR: process (pid: 3398068) is no longer running 00:04:11.976 04:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:11.976 04:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:11.976 04:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:11.976 04:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:11.976 04:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:11.976 04:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:11.976 04:58:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:11.976 04:58:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:11.976 04:58:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:11.976 04:58:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:11.976 04:58:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3398058 00:04:11.976 04:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3398058 ']' 00:04:11.976 04:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3398058 00:04:11.976 04:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:04:11.976 04:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:11.976 04:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3398058 00:04:11.976 04:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:11.976 04:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:11.976 04:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3398058' 00:04:11.976 killing process with pid 3398058 00:04:11.976 04:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3398058 00:04:11.976 04:58:48 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3398058 00:04:12.543 00:04:12.543 real 0m1.465s 00:04:12.543 user 0m3.966s 00:04:12.543 sys 0m0.405s 00:04:12.543 04:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.543 04:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:12.543 ************************************ 00:04:12.543 END TEST locking_overlapped_coremask 00:04:12.543 ************************************ 00:04:12.543 04:58:48 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:12.543 04:58:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.543 04:58:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.543 04:58:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:12.543 ************************************ 00:04:12.543 START TEST locking_overlapped_coremask_via_rpc 00:04:12.543 ************************************ 00:04:12.543 04:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:04:12.543 04:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3398323 00:04:12.543 04:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3398323 /var/tmp/spdk.sock 00:04:12.543 04:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:12.543 04:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3398323 ']' 00:04:12.543 04:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.543 04:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:12.543 04:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:12.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:12.543 04:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:12.543 04:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.543 [2024-12-09 04:58:49.065151] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:04:12.543 [2024-12-09 04:58:49.065196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3398323 ] 00:04:12.543 [2024-12-09 04:58:49.130033] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:12.543 [2024-12-09 04:58:49.130062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:12.543 [2024-12-09 04:58:49.173965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:12.543 [2024-12-09 04:58:49.174076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:12.543 [2024-12-09 04:58:49.174079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.802 04:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:12.802 04:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:12.802 04:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3398338 00:04:12.802 04:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3398338 /var/tmp/spdk2.sock 00:04:12.802 04:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:12.802 04:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3398338 ']' 00:04:12.802 04:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:12.802 04:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:12.802 04:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:12.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:12.802 04:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:12.802 04:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.802 [2024-12-09 04:58:49.440527] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:04:12.802 [2024-12-09 04:58:49.440571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3398338 ] 00:04:13.061 [2024-12-09 04:58:49.535689] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:13.061 [2024-12-09 04:58:49.535719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:13.061 [2024-12-09 04:58:49.623470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:13.061 [2024-12-09 04:58:49.627094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:13.061 [2024-12-09 04:58:49.627095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:13.629 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:13.629 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:13.629 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:13.629 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.629 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.889 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.889 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:13.889 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:13.889 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:13.889 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:13.889 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.889 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:13.889 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.889 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:13.889 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.889 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.889 [2024-12-09 04:58:50.287069] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3398323 has claimed it. 
00:04:13.889 request: 00:04:13.889 { 00:04:13.889 "method": "framework_enable_cpumask_locks", 00:04:13.889 "req_id": 1 00:04:13.889 } 00:04:13.889 Got JSON-RPC error response 00:04:13.889 response: 00:04:13.889 { 00:04:13.889 "code": -32603, 00:04:13.889 "message": "Failed to claim CPU core: 2" 00:04:13.889 } 00:04:13.889 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:13.889 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:13.889 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:13.890 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:13.890 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:13.890 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3398323 /var/tmp/spdk.sock 00:04:13.890 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3398323 ']' 00:04:13.890 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.890 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:13.890 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.890 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:13.890 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.890 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:13.890 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:13.890 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3398338 /var/tmp/spdk2.sock 00:04:13.890 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3398338 ']' 00:04:13.890 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:13.890 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:13.890 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:13.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
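The two *ERROR* records above are expected: the first spdk_tgt was started with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so the masks overlap on core 2, the second target cannot take /var/tmp/spdk_cpu_lock_002, and the framework_enable_cpumask_locks RPC returns the -32603 "Failed to claim CPU core: 2" response shown. A minimal sketch of the overlap arithmetic (hypothetical helper, not part of the test script):

# Intersect the two core masks used by the test; a non-zero result means lock contention.
mask1=0x7    # first spdk_tgt: cores 0,1,2
mask2=0x1c   # second spdk_tgt: cores 2,3,4
printf 'overlapping mask: 0x%x\n' $(( mask1 & mask2 ))   # prints 0x4, i.e. core 2

The same condition can be reproduced by hand against a running target with the RPC client, e.g. scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks (assuming the default SPDK script layout).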
00:04:13.890 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:13.890 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.149 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:14.149 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:14.149 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:14.149 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:14.149 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:14.149 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:14.149 00:04:14.149 real 0m1.672s 00:04:14.149 user 0m0.779s 00:04:14.149 sys 0m0.159s 00:04:14.149 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.149 04:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.149 ************************************ 00:04:14.149 END TEST locking_overlapped_coremask_via_rpc 00:04:14.149 ************************************ 00:04:14.149 04:58:50 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:14.149 04:58:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3398323 ]] 00:04:14.149 04:58:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3398323 00:04:14.149 04:58:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3398323 ']' 00:04:14.149 04:58:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3398323 00:04:14.149 04:58:50 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:14.149 04:58:50 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:14.149 04:58:50 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3398323 00:04:14.149 04:58:50 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:14.149 04:58:50 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:14.149 04:58:50 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3398323' 00:04:14.149 killing process with pid 3398323 00:04:14.149 04:58:50 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3398323 00:04:14.149 04:58:50 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3398323 00:04:14.718 04:58:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3398338 ]] 00:04:14.718 04:58:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3398338 00:04:14.718 04:58:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3398338 ']' 00:04:14.718 04:58:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3398338 00:04:14.718 04:58:51 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:14.718 04:58:51 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:04:14.718 04:58:51 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3398338 00:04:14.718 04:58:51 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:14.718 04:58:51 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:14.718 04:58:51 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3398338' 00:04:14.718 killing process with pid 3398338 00:04:14.718 04:58:51 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3398338 00:04:14.718 04:58:51 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3398338 00:04:14.978 04:58:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:14.978 04:58:51 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:14.978 04:58:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3398323 ]] 00:04:14.978 04:58:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3398323 00:04:14.978 04:58:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3398323 ']' 00:04:14.978 04:58:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3398323 00:04:14.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3398323) - No such process 00:04:14.978 04:58:51 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3398323 is not found' 00:04:14.978 Process with pid 3398323 is not found 00:04:14.978 04:58:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3398338 ]] 00:04:14.978 04:58:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3398338 00:04:14.978 04:58:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3398338 ']' 00:04:14.978 04:58:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3398338 00:04:14.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3398338) - No such process 00:04:14.978 04:58:51 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3398338 is not found' 00:04:14.978 Process with pid 3398338 is not found 00:04:14.978 04:58:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:14.978 00:04:14.978 real 0m14.337s 00:04:14.978 user 0m24.676s 00:04:14.978 sys 0m4.921s 00:04:14.978 04:58:51 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.978 04:58:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:14.978 ************************************ 00:04:14.978 END TEST cpu_locks 00:04:14.978 ************************************ 00:04:14.978 00:04:14.978 real 0m38.947s 00:04:14.978 user 1m13.722s 00:04:14.978 sys 0m8.285s 00:04:14.978 04:58:51 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.978 04:58:51 event -- common/autotest_common.sh@10 -- # set +x 00:04:14.978 ************************************ 00:04:14.978 END TEST event 00:04:14.978 ************************************ 00:04:14.978 04:58:51 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:14.978 04:58:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.978 04:58:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.978 04:58:51 -- common/autotest_common.sh@10 -- # set +x 00:04:15.238 ************************************ 00:04:15.238 START TEST thread 00:04:15.238 ************************************ 00:04:15.238 04:58:51 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:15.238 * Looking for test storage... 00:04:15.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:15.238 04:58:51 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:15.238 04:58:51 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:04:15.238 04:58:51 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:15.238 04:58:51 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:15.238 04:58:51 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.238 04:58:51 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.238 04:58:51 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.238 04:58:51 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.238 04:58:51 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.238 04:58:51 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.238 04:58:51 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.238 04:58:51 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.238 04:58:51 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.238 04:58:51 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.238 04:58:51 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.238 04:58:51 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:15.238 04:58:51 thread -- scripts/common.sh@345 -- # : 1 00:04:15.238 04:58:51 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.238 04:58:51 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:15.238 04:58:51 thread -- scripts/common.sh@365 -- # decimal 1 00:04:15.238 04:58:51 thread -- scripts/common.sh@353 -- # local d=1 00:04:15.238 04:58:51 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.238 04:58:51 thread -- scripts/common.sh@355 -- # echo 1 00:04:15.238 04:58:51 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.238 04:58:51 thread -- scripts/common.sh@366 -- # decimal 2 00:04:15.238 04:58:51 thread -- scripts/common.sh@353 -- # local d=2 00:04:15.238 04:58:51 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.238 04:58:51 thread -- scripts/common.sh@355 -- # echo 2 00:04:15.238 04:58:51 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.238 04:58:51 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.238 04:58:51 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.238 04:58:51 thread -- scripts/common.sh@368 -- # return 0 00:04:15.238 04:58:51 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.239 04:58:51 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:15.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.239 --rc genhtml_branch_coverage=1 00:04:15.239 --rc genhtml_function_coverage=1 00:04:15.239 --rc genhtml_legend=1 00:04:15.239 --rc geninfo_all_blocks=1 00:04:15.239 --rc geninfo_unexecuted_blocks=1 00:04:15.239 00:04:15.239 ' 00:04:15.239 04:58:51 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:15.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.239 --rc genhtml_branch_coverage=1 00:04:15.239 --rc genhtml_function_coverage=1 00:04:15.239 --rc genhtml_legend=1 00:04:15.239 --rc geninfo_all_blocks=1 00:04:15.239 --rc geninfo_unexecuted_blocks=1 00:04:15.239 
00:04:15.239 ' 00:04:15.239 04:58:51 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:15.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.239 --rc genhtml_branch_coverage=1 00:04:15.239 --rc genhtml_function_coverage=1 00:04:15.239 --rc genhtml_legend=1 00:04:15.239 --rc geninfo_all_blocks=1 00:04:15.239 --rc geninfo_unexecuted_blocks=1 00:04:15.239 00:04:15.239 ' 00:04:15.239 04:58:51 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:15.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.239 --rc genhtml_branch_coverage=1 00:04:15.239 --rc genhtml_function_coverage=1 00:04:15.239 --rc genhtml_legend=1 00:04:15.239 --rc geninfo_all_blocks=1 00:04:15.239 --rc geninfo_unexecuted_blocks=1 00:04:15.239 00:04:15.239 ' 00:04:15.239 04:58:51 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:15.239 04:58:51 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:15.239 04:58:51 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.239 04:58:51 thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.239 ************************************ 00:04:15.239 START TEST thread_poller_perf 00:04:15.239 ************************************ 00:04:15.239 04:58:51 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:15.239 [2024-12-09 04:58:51.858630] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:04:15.239 [2024-12-09 04:58:51.858690] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3398899 ] 00:04:15.498 [2024-12-09 04:58:51.925466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.498 [2024-12-09 04:58:51.965218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.498 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:04:16.462 [2024-12-09T03:58:53.108Z] ====================================== 00:04:16.462 [2024-12-09T03:58:53.108Z] busy:2305472570 (cyc) 00:04:16.462 [2024-12-09T03:58:53.108Z] total_run_count: 413000 00:04:16.462 [2024-12-09T03:58:53.108Z] tsc_hz: 2300000000 (cyc) 00:04:16.462 [2024-12-09T03:58:53.108Z] ====================================== 00:04:16.462 [2024-12-09T03:58:53.108Z] poller_cost: 5582 (cyc), 2426 (nsec) 00:04:16.462 00:04:16.462 real 0m1.204s 00:04:16.462 user 0m1.130s 00:04:16.462 sys 0m0.070s 00:04:16.462 04:58:53 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.462 04:58:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:16.462 ************************************ 00:04:16.462 END TEST thread_poller_perf 00:04:16.462 ************************************ 00:04:16.462 04:58:53 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:16.462 04:58:53 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:16.462 04:58:53 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.462 04:58:53 thread -- common/autotest_common.sh@10 -- # set +x 00:04:16.721 ************************************ 00:04:16.721 START TEST thread_poller_perf 00:04:16.721 ************************************ 00:04:16.721 04:58:53 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:16.721 [2024-12-09 04:58:53.133259] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:04:16.721 [2024-12-09 04:58:53.133326] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3399147 ] 00:04:16.721 [2024-12-09 04:58:53.200764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.721 [2024-12-09 04:58:53.241274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.721 Running 1000 pollers for 1 seconds with 0 microseconds period. 
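The poller_cost summary above follows directly from the printed counters: 2305472570 busy cycles over 413000 poller invocations is about 5582 cycles per call, and at the reported tsc_hz of 2300000000 (2.3 cycles per nanosecond) that is about 2426 ns, matching the last line. A quick re-derivation in shell arithmetic, for illustration only:

# Re-derive poller_cost for the 1 us period run from the counters printed above.
busy=2305472570; runs=413000; tsc_hz=2300000000
echo "cycles per poll: $(( busy / runs ))"                        # 5582
echo "nsec per poll:   $(( busy / runs * 1000000000 / tsc_hz ))"  # 2426

The 0-microsecond run that follows works out the same way: 2301646912 cycles over 5418000 runs gives roughly 424 cycles, or about 184 ns per poll.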
00:04:18.101 [2024-12-09T03:58:54.747Z] ====================================== 00:04:18.101 [2024-12-09T03:58:54.747Z] busy:2301646912 (cyc) 00:04:18.101 [2024-12-09T03:58:54.747Z] total_run_count: 5418000 00:04:18.101 [2024-12-09T03:58:54.747Z] tsc_hz: 2300000000 (cyc) 00:04:18.101 [2024-12-09T03:58:54.747Z] ====================================== 00:04:18.101 [2024-12-09T03:58:54.747Z] poller_cost: 424 (cyc), 184 (nsec) 00:04:18.101 00:04:18.101 real 0m1.204s 00:04:18.101 user 0m1.133s 00:04:18.101 sys 0m0.067s 00:04:18.101 04:58:54 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.101 04:58:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:18.101 ************************************ 00:04:18.101 END TEST thread_poller_perf 00:04:18.101 ************************************ 00:04:18.101 04:58:54 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:18.101 00:04:18.101 real 0m2.719s 00:04:18.101 user 0m2.422s 00:04:18.101 sys 0m0.311s 00:04:18.101 04:58:54 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.101 04:58:54 thread -- common/autotest_common.sh@10 -- # set +x 00:04:18.101 ************************************ 00:04:18.101 END TEST thread 00:04:18.101 ************************************ 00:04:18.101 04:58:54 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:18.101 04:58:54 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:18.101 04:58:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.101 04:58:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.101 04:58:54 -- common/autotest_common.sh@10 -- # set +x 00:04:18.101 ************************************ 00:04:18.101 START TEST app_cmdline 00:04:18.101 ************************************ 00:04:18.101 04:58:54 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:18.101 * Looking for test storage... 
00:04:18.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:18.101 04:58:54 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:18.101 04:58:54 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:04:18.101 04:58:54 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:18.101 04:58:54 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:18.101 04:58:54 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:18.101 04:58:54 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.101 04:58:54 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:18.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.101 --rc genhtml_branch_coverage=1 00:04:18.101 --rc genhtml_function_coverage=1 00:04:18.101 --rc genhtml_legend=1 00:04:18.101 --rc geninfo_all_blocks=1 00:04:18.101 --rc geninfo_unexecuted_blocks=1 00:04:18.101 00:04:18.101 ' 00:04:18.101 04:58:54 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:18.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.101 --rc genhtml_branch_coverage=1 00:04:18.101 --rc genhtml_function_coverage=1 00:04:18.101 --rc genhtml_legend=1 00:04:18.101 --rc geninfo_all_blocks=1 00:04:18.101 --rc geninfo_unexecuted_blocks=1 
00:04:18.101 00:04:18.101 ' 00:04:18.101 04:58:54 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:18.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.101 --rc genhtml_branch_coverage=1 00:04:18.101 --rc genhtml_function_coverage=1 00:04:18.101 --rc genhtml_legend=1 00:04:18.101 --rc geninfo_all_blocks=1 00:04:18.101 --rc geninfo_unexecuted_blocks=1 00:04:18.101 00:04:18.101 ' 00:04:18.101 04:58:54 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:18.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.101 --rc genhtml_branch_coverage=1 00:04:18.101 --rc genhtml_function_coverage=1 00:04:18.101 --rc genhtml_legend=1 00:04:18.101 --rc geninfo_all_blocks=1 00:04:18.101 --rc geninfo_unexecuted_blocks=1 00:04:18.101 00:04:18.101 ' 00:04:18.101 04:58:54 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:18.101 04:58:54 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3399446 00:04:18.101 04:58:54 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3399446 00:04:18.101 04:58:54 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3399446 ']' 00:04:18.101 04:58:54 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.101 04:58:54 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:18.101 04:58:54 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.101 04:58:54 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:18.101 04:58:54 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:18.101 04:58:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:18.101 [2024-12-09 04:58:54.646384] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:04:18.101 [2024-12-09 04:58:54.646432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3399446 ] 00:04:18.101 [2024-12-09 04:58:54.710124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.360 [2024-12-09 04:58:54.753343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.360 04:58:54 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:18.360 04:58:54 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:04:18.360 04:58:54 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:18.623 { 00:04:18.623 "version": "SPDK v25.01-pre git sha1 421ce3854", 00:04:18.623 "fields": { 00:04:18.623 "major": 25, 00:04:18.623 "minor": 1, 00:04:18.623 "patch": 0, 00:04:18.623 "suffix": "-pre", 00:04:18.623 "commit": "421ce3854" 00:04:18.623 } 00:04:18.623 } 00:04:18.623 04:58:55 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:18.623 04:58:55 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:18.623 04:58:55 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:04:18.623 04:58:55 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:18.623 04:58:55 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:18.623 04:58:55 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:18.623 04:58:55 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:18.623 04:58:55 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.623 04:58:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:18.623 04:58:55 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.623 04:58:55 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:18.623 04:58:55 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:18.623 04:58:55 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:18.623 04:58:55 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:04:18.623 04:58:55 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:18.623 04:58:55 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:18.623 04:58:55 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:18.623 04:58:55 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:18.623 04:58:55 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:18.623 04:58:55 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:18.623 04:58:55 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:18.624 04:58:55 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:18.624 04:58:55 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:18.624 04:58:55 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:18.882 request: 00:04:18.882 { 00:04:18.882 "method": "env_dpdk_get_mem_stats", 00:04:18.882 "req_id": 1 00:04:18.882 } 00:04:18.882 Got JSON-RPC error response 00:04:18.882 response: 00:04:18.882 { 00:04:18.882 "code": -32601, 00:04:18.882 "message": "Method not found" 00:04:18.882 } 00:04:18.882 04:58:55 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:04:18.882 04:58:55 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:18.882 04:58:55 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:18.882 04:58:55 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:18.882 04:58:55 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3399446 00:04:18.882 04:58:55 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3399446 ']' 00:04:18.882 04:58:55 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3399446 00:04:18.882 04:58:55 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:04:18.882 04:58:55 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:18.882 04:58:55 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3399446 00:04:18.882 04:58:55 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:18.882 04:58:55 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:18.882 04:58:55 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3399446' 00:04:18.882 killing process with pid 3399446 00:04:18.882 04:58:55 app_cmdline -- common/autotest_common.sh@973 -- # kill 3399446 00:04:18.882 04:58:55 app_cmdline -- common/autotest_common.sh@978 -- # wait 3399446 00:04:19.141 00:04:19.141 real 0m1.357s 00:04:19.141 user 0m1.605s 00:04:19.141 sys 0m0.421s 00:04:19.141 04:58:55 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.141 04:58:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:19.141 ************************************ 00:04:19.141 END TEST app_cmdline 00:04:19.141 ************************************ 00:04:19.400 04:58:55 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:19.400 04:58:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.400 04:58:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.400 04:58:55 -- common/autotest_common.sh@10 -- # set +x 00:04:19.400 ************************************ 00:04:19.400 START TEST version 00:04:19.400 ************************************ 00:04:19.400 04:58:55 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:19.400 * Looking for test storage... 
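In the app_cmdline test above the target is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so spdk_get_version returns the JSON version document shown, while env_dpdk_get_mem_stats is rejected with the -32601 "Method not found" error because it is not on the allow list. Equivalent manual queries, assuming a target listening on the default /var/tmp/spdk.sock socket:

# Hypothetical manual invocations; paths assume the standard SPDK checkout layout.
./scripts/rpc.py spdk_get_version         # prints the {"version": ..., "fields": ...} document
./scripts/rpc.py rpc_get_methods          # with --rpcs-allowed, only the two allowed methods are listed
./scripts/rpc.py env_dpdk_get_mem_stats   # fails with "Method not found" under this allow list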
00:04:19.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:19.400 04:58:55 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:19.400 04:58:55 version -- common/autotest_common.sh@1693 -- # lcov --version 00:04:19.400 04:58:55 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:19.400 04:58:55 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:19.400 04:58:55 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.400 04:58:55 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.400 04:58:55 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.400 04:58:55 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.400 04:58:55 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.400 04:58:55 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.400 04:58:55 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.400 04:58:55 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.400 04:58:55 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.400 04:58:55 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.400 04:58:55 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.400 04:58:55 version -- scripts/common.sh@344 -- # case "$op" in 00:04:19.400 04:58:55 version -- scripts/common.sh@345 -- # : 1 00:04:19.400 04:58:55 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.401 04:58:55 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:19.401 04:58:55 version -- scripts/common.sh@365 -- # decimal 1 00:04:19.401 04:58:55 version -- scripts/common.sh@353 -- # local d=1 00:04:19.401 04:58:55 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.401 04:58:55 version -- scripts/common.sh@355 -- # echo 1 00:04:19.401 04:58:55 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.401 04:58:55 version -- scripts/common.sh@366 -- # decimal 2 00:04:19.401 04:58:55 version -- scripts/common.sh@353 -- # local d=2 00:04:19.401 04:58:55 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.401 04:58:55 version -- scripts/common.sh@355 -- # echo 2 00:04:19.401 04:58:55 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.401 04:58:55 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.401 04:58:55 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.401 04:58:55 version -- scripts/common.sh@368 -- # return 0 00:04:19.401 04:58:55 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.401 04:58:55 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:19.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.401 --rc genhtml_branch_coverage=1 00:04:19.401 --rc genhtml_function_coverage=1 00:04:19.401 --rc genhtml_legend=1 00:04:19.401 --rc geninfo_all_blocks=1 00:04:19.401 --rc geninfo_unexecuted_blocks=1 00:04:19.401 00:04:19.401 ' 00:04:19.401 04:58:55 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:19.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.401 --rc genhtml_branch_coverage=1 00:04:19.401 --rc genhtml_function_coverage=1 00:04:19.401 --rc genhtml_legend=1 00:04:19.401 --rc geninfo_all_blocks=1 00:04:19.401 --rc geninfo_unexecuted_blocks=1 00:04:19.401 00:04:19.401 ' 00:04:19.401 04:58:55 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:19.401 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.401 --rc genhtml_branch_coverage=1 00:04:19.401 --rc genhtml_function_coverage=1 00:04:19.401 --rc genhtml_legend=1 00:04:19.401 --rc geninfo_all_blocks=1 00:04:19.401 --rc geninfo_unexecuted_blocks=1 00:04:19.401 00:04:19.401 ' 00:04:19.401 04:58:55 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:19.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.401 --rc genhtml_branch_coverage=1 00:04:19.401 --rc genhtml_function_coverage=1 00:04:19.401 --rc genhtml_legend=1 00:04:19.401 --rc geninfo_all_blocks=1 00:04:19.401 --rc geninfo_unexecuted_blocks=1 00:04:19.401 00:04:19.401 ' 00:04:19.401 04:58:55 version -- app/version.sh@17 -- # get_header_version major 00:04:19.401 04:58:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:19.401 04:58:55 version -- app/version.sh@14 -- # cut -f2 00:04:19.401 04:58:55 version -- app/version.sh@14 -- # tr -d '"' 00:04:19.401 04:58:56 version -- app/version.sh@17 -- # major=25 00:04:19.401 04:58:56 version -- app/version.sh@18 -- # get_header_version minor 00:04:19.401 04:58:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:19.401 04:58:56 version -- app/version.sh@14 -- # cut -f2 00:04:19.401 04:58:56 version -- app/version.sh@14 -- # tr -d '"' 00:04:19.401 04:58:56 version -- app/version.sh@18 -- # minor=1 00:04:19.401 04:58:56 version -- app/version.sh@19 -- # get_header_version patch 00:04:19.401 04:58:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:19.401 04:58:56 version -- app/version.sh@14 -- # cut -f2 00:04:19.401 04:58:56 version -- app/version.sh@14 -- # tr -d '"' 00:04:19.401 04:58:56 version -- app/version.sh@19 -- # patch=0 00:04:19.401 04:58:56 version -- app/version.sh@20 -- # get_header_version suffix 00:04:19.401 04:58:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:19.401 04:58:56 version -- app/version.sh@14 -- # cut -f2 00:04:19.401 04:58:56 version -- app/version.sh@14 -- # tr -d '"' 00:04:19.401 04:58:56 version -- app/version.sh@20 -- # suffix=-pre 00:04:19.401 04:58:56 version -- app/version.sh@22 -- # version=25.1 00:04:19.401 04:58:56 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:19.401 04:58:56 version -- app/version.sh@28 -- # version=25.1rc0 00:04:19.401 04:58:56 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:19.401 04:58:56 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:19.660 04:58:56 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:19.660 04:58:56 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:19.660 00:04:19.660 real 0m0.219s 00:04:19.660 user 0m0.145s 00:04:19.660 sys 0m0.114s 00:04:19.660 04:58:56 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.660 
04:58:56 version -- common/autotest_common.sh@10 -- # set +x 00:04:19.660 ************************************ 00:04:19.660 END TEST version 00:04:19.660 ************************************ 00:04:19.660 04:58:56 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:19.660 04:58:56 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:19.660 04:58:56 -- spdk/autotest.sh@194 -- # uname -s 00:04:19.660 04:58:56 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:04:19.660 04:58:56 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:19.660 04:58:56 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:19.660 04:58:56 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:19.660 04:58:56 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:04:19.660 04:58:56 -- spdk/autotest.sh@260 -- # timing_exit lib 00:04:19.660 04:58:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:19.660 04:58:56 -- common/autotest_common.sh@10 -- # set +x 00:04:19.660 04:58:56 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:04:19.660 04:58:56 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:04:19.660 04:58:56 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:04:19.660 04:58:56 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:04:19.660 04:58:56 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:04:19.660 04:58:56 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:04:19.660 04:58:56 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:19.660 04:58:56 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:19.660 04:58:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.660 04:58:56 -- common/autotest_common.sh@10 -- # set +x 00:04:19.660 ************************************ 00:04:19.660 START TEST nvmf_tcp 00:04:19.660 ************************************ 00:04:19.660 04:58:56 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:19.660 * Looking for test storage... 
00:04:19.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:19.660 04:58:56 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:19.660 04:58:56 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:19.660 04:58:56 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:19.920 04:58:56 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.920 04:58:56 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:19.920 04:58:56 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.920 04:58:56 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:19.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.920 --rc genhtml_branch_coverage=1 00:04:19.920 --rc genhtml_function_coverage=1 00:04:19.920 --rc genhtml_legend=1 00:04:19.920 --rc geninfo_all_blocks=1 00:04:19.920 --rc geninfo_unexecuted_blocks=1 00:04:19.920 00:04:19.920 ' 00:04:19.920 04:58:56 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:19.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.920 --rc genhtml_branch_coverage=1 00:04:19.920 --rc genhtml_function_coverage=1 00:04:19.920 --rc genhtml_legend=1 00:04:19.920 --rc geninfo_all_blocks=1 00:04:19.920 --rc geninfo_unexecuted_blocks=1 00:04:19.920 00:04:19.920 ' 00:04:19.920 04:58:56 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:19.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.920 --rc genhtml_branch_coverage=1 00:04:19.920 --rc genhtml_function_coverage=1 00:04:19.920 --rc genhtml_legend=1 00:04:19.920 --rc geninfo_all_blocks=1 00:04:19.920 --rc geninfo_unexecuted_blocks=1 00:04:19.920 00:04:19.920 ' 00:04:19.920 04:58:56 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:19.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.920 --rc genhtml_branch_coverage=1 00:04:19.920 --rc genhtml_function_coverage=1 00:04:19.920 --rc genhtml_legend=1 00:04:19.920 --rc geninfo_all_blocks=1 00:04:19.920 --rc geninfo_unexecuted_blocks=1 00:04:19.920 00:04:19.920 ' 00:04:19.920 04:58:56 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:19.920 04:58:56 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:19.920 04:58:56 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:19.920 04:58:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:19.920 04:58:56 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.920 04:58:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:19.920 ************************************ 00:04:19.920 START TEST nvmf_target_core 00:04:19.920 ************************************ 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:19.920 * Looking for test storage... 00:04:19.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:19.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.920 --rc genhtml_branch_coverage=1 00:04:19.920 --rc genhtml_function_coverage=1 00:04:19.920 --rc genhtml_legend=1 00:04:19.920 --rc geninfo_all_blocks=1 00:04:19.920 --rc geninfo_unexecuted_blocks=1 00:04:19.920 00:04:19.920 ' 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:19.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.920 --rc genhtml_branch_coverage=1 00:04:19.920 --rc genhtml_function_coverage=1 00:04:19.920 --rc genhtml_legend=1 00:04:19.920 --rc geninfo_all_blocks=1 00:04:19.920 --rc geninfo_unexecuted_blocks=1 00:04:19.920 00:04:19.920 ' 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:19.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.920 --rc genhtml_branch_coverage=1 00:04:19.920 --rc genhtml_function_coverage=1 00:04:19.920 --rc genhtml_legend=1 00:04:19.920 --rc geninfo_all_blocks=1 00:04:19.920 --rc geninfo_unexecuted_blocks=1 00:04:19.920 00:04:19.920 ' 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:19.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.920 --rc genhtml_branch_coverage=1 00:04:19.920 --rc genhtml_function_coverage=1 00:04:19.920 --rc genhtml_legend=1 00:04:19.920 --rc geninfo_all_blocks=1 00:04:19.920 --rc geninfo_unexecuted_blocks=1 00:04:19.920 00:04:19.920 ' 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:19.920 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:19.921 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.921 04:58:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:20.180 
************************************ 00:04:20.180 START TEST nvmf_abort 00:04:20.180 ************************************ 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:20.181 * Looking for test storage... 00:04:20.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:20.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.181 --rc genhtml_branch_coverage=1 00:04:20.181 --rc genhtml_function_coverage=1 00:04:20.181 --rc genhtml_legend=1 00:04:20.181 --rc geninfo_all_blocks=1 00:04:20.181 --rc geninfo_unexecuted_blocks=1 00:04:20.181 00:04:20.181 ' 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:20.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.181 --rc genhtml_branch_coverage=1 00:04:20.181 --rc genhtml_function_coverage=1 00:04:20.181 --rc genhtml_legend=1 00:04:20.181 --rc geninfo_all_blocks=1 00:04:20.181 --rc geninfo_unexecuted_blocks=1 00:04:20.181 00:04:20.181 ' 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:20.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.181 --rc genhtml_branch_coverage=1 00:04:20.181 --rc genhtml_function_coverage=1 00:04:20.181 --rc genhtml_legend=1 00:04:20.181 --rc geninfo_all_blocks=1 00:04:20.181 --rc geninfo_unexecuted_blocks=1 00:04:20.181 00:04:20.181 ' 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:20.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.181 --rc genhtml_branch_coverage=1 00:04:20.181 --rc genhtml_function_coverage=1 00:04:20.181 --rc genhtml_legend=1 00:04:20.181 --rc geninfo_all_blocks=1 00:04:20.181 --rc geninfo_unexecuted_blocks=1 00:04:20.181 00:04:20.181 ' 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:20.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:20.181 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
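The nvmftestinit trace that follows discovers the two e810 ports, runs nvmf_tcp_init from test/nvmf/common.sh, and ends with ping checks between 10.0.0.1 and 10.0.0.2. Condensed, the namespace plumbing it performs looks roughly like the sketch below; the TARGET_IF/INITIATOR_IF variables are added here only for readability, while the interface names, addresses, and commands are the ones that appear verbatim later in this trace.

# Rough sketch of the TCP phy setup performed by nvmf_tcp_init (not the full helper):
TARGET_IF=cvl_0_0        # port handed to the SPDK target, moved into its own netns
INITIATOR_IF=cvl_0_1     # port left in the default netns for the initiator
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add cvl_0_0_ns_spdk
ip link set "$TARGET_IF" netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec cvl_0_0_ns_spdk ip link set "$TARGET_IF" up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Isolating the target port in cvl_0_0_ns_spdk is what allows the nvmf target applications later in this log to be launched under "ip netns exec cvl_0_0_ns_spdk" while the initiator-side tools run in the default namespace.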
00:04:20.182 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:20.182 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:20.182 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:20.182 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:20.182 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:20.182 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:20.182 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:20.182 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:20.182 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:20.182 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:20.182 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:20.182 04:58:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:25.471 04:59:02 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:04:25.471 Found 0000:86:00.0 (0x8086 - 0x159b) 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:04:25.471 Found 0000:86:00.1 (0x8086 - 0x159b) 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:04:25.471 04:59:02 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:04:25.471 Found net devices under 0000:86:00.0: cvl_0_0 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:04:25.471 Found net devices under 0000:86:00.1: cvl_0_1 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:25.471 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:04:25.472 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:25.472 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:04:25.472 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:04:25.472 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:25.472 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:25.472 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:25.472 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:25.472 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:04:25.472 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:25.472 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:25.472 04:59:02 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:04:25.472 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:04:25.472 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:25.472 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:04:25.472 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:04:25.472 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:04:25.472 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:04:25.472 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:04:25.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:04:25.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:04:25.731 00:04:25.731 --- 10.0.0.2 ping statistics --- 00:04:25.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:25.731 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:04:25.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:04:25.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:04:25.731 00:04:25.731 --- 10.0.0.1 ping statistics --- 00:04:25.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:25.731 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3403117 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3403117 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3403117 ']' 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.731 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:25.732 [2024-12-09 04:59:02.371712] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:04:25.732 [2024-12-09 04:59:02.371757] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:25.991 [2024-12-09 04:59:02.441026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:25.991 [2024-12-09 04:59:02.484566] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:25.991 [2024-12-09 04:59:02.484608] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:25.991 [2024-12-09 04:59:02.484616] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:25.991 [2024-12-09 04:59:02.484622] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:25.991 [2024-12-09 04:59:02.484628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:04:25.991 [2024-12-09 04:59:02.485959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:25.991 [2024-12-09 04:59:02.486043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:25.991 [2024-12-09 04:59:02.486044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:25.991 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.991 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:04:25.991 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:04:25.991 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:25.991 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:25.991 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:25.991 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:04:25.991 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.991 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:26.250 [2024-12-09 04:59:02.636681] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:26.250 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.250 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:04:26.250 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.250 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:26.250 Malloc0 00:04:26.250 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.250 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:26.250 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.250 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:26.250 Delay0 
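By this point abort.sh has started nvmf_tgt inside the target namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE), created the TCP transport, and stacked a Delay0 delay bdev on top of the 64 MiB Malloc0 bdev; the next entries export Delay0 through a subsystem and drive it with the abort example. A condensed replay of that RPC sequence is sketched below, with parameters copied from the trace; the bare rpc.py invocation is an assumption here, since the test issues the same calls through its rpc_cmd wrapper.

# Hypothetical replay of the RPCs traced in this section (scripts/rpc.py against the default socket):
rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
rpc.py bdev_malloc_create 64 4096 -b Malloc0          # 64 MiB backing bdev, 4 KiB blocks
rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000    # avg/p99 read and write latency, 1,000,000 us each
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# Queue depth 128 against a ~1 s delayed namespace keeps I/O outstanding, so most requests can be aborted:
build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The abort counts reported a few entries below (tens of thousands of aborts submitted, nearly all successful) are the expected outcome of that artificial delay.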
00:04:26.250 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.250 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:04:26.250 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.250 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:26.250 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.250 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:04:26.250 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.250 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:26.250 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.250 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:04:26.250 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.250 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:26.250 [2024-12-09 04:59:02.707781] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:26.250 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.250 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:26.250 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.250 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:26.250 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.250 04:59:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:04:26.250 [2024-12-09 04:59:02.824312] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:04:28.784 Initializing NVMe Controllers 00:04:28.784 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:04:28.784 controller IO queue size 128 less than required 00:04:28.784 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:04:28.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:04:28.784 Initialization complete. Launching workers. 
00:04:28.784 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 36176 00:04:28.784 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36241, failed to submit 62 00:04:28.784 success 36180, unsuccessful 61, failed 0 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:04:28.784 rmmod nvme_tcp 00:04:28.784 rmmod nvme_fabrics 00:04:28.784 rmmod nvme_keyring 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3403117 ']' 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3403117 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3403117 ']' 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3403117 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3403117 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3403117' 00:04:28.784 killing process with pid 3403117 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3403117 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3403117 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:04:28.784 04:59:05 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:28.784 04:59:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:04:31.320 00:04:31.320 real 0m10.850s 00:04:31.320 user 0m11.906s 00:04:31.320 sys 0m5.032s 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:31.320 ************************************ 00:04:31.320 END TEST nvmf_abort 00:04:31.320 ************************************ 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:31.320 ************************************ 00:04:31.320 START TEST nvmf_ns_hotplug_stress 00:04:31.320 ************************************ 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:31.320 * Looking for test storage... 
00:04:31.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:31.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.320 --rc genhtml_branch_coverage=1 00:04:31.320 --rc genhtml_function_coverage=1 00:04:31.320 --rc genhtml_legend=1 00:04:31.320 --rc geninfo_all_blocks=1 00:04:31.320 --rc geninfo_unexecuted_blocks=1 00:04:31.320 00:04:31.320 ' 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:31.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.320 --rc genhtml_branch_coverage=1 00:04:31.320 --rc genhtml_function_coverage=1 00:04:31.320 --rc genhtml_legend=1 00:04:31.320 --rc geninfo_all_blocks=1 00:04:31.320 --rc geninfo_unexecuted_blocks=1 00:04:31.320 00:04:31.320 ' 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:31.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.320 --rc genhtml_branch_coverage=1 00:04:31.320 --rc genhtml_function_coverage=1 00:04:31.320 --rc genhtml_legend=1 00:04:31.320 --rc geninfo_all_blocks=1 00:04:31.320 --rc geninfo_unexecuted_blocks=1 00:04:31.320 00:04:31.320 ' 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:31.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.320 --rc genhtml_branch_coverage=1 00:04:31.320 --rc genhtml_function_coverage=1 00:04:31.320 --rc genhtml_legend=1 00:04:31.320 --rc geninfo_all_blocks=1 00:04:31.320 --rc geninfo_unexecuted_blocks=1 00:04:31.320 00:04:31.320 ' 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:31.320 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:31.321 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:04:31.321 04:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:04:36.597 Found 0000:86:00.0 (0x8086 - 0x159b) 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:36.597 
04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:04:36.597 Found 0000:86:00.1 (0x8086 - 0x159b) 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:36.597 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:04:36.598 Found net devices under 0000:86:00.0: cvl_0_0 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:04:36.598 Found net devices under 0000:86:00.1: cvl_0_1 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:04:36.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:04:36.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms 00:04:36.598 00:04:36.598 --- 10.0.0.2 ping statistics --- 00:04:36.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:36.598 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:04:36.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:04:36.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:04:36.598 00:04:36.598 --- 10.0.0.1 ping statistics --- 00:04:36.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:36.598 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3406930 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3406930 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
3406930 ']' 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.598 04:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:04:36.598 [2024-12-09 04:59:13.017707] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:04:36.598 [2024-12-09 04:59:13.017759] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:36.598 [2024-12-09 04:59:13.087612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:36.598 [2024-12-09 04:59:13.129924] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:36.598 [2024-12-09 04:59:13.129961] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:36.598 [2024-12-09 04:59:13.129968] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:36.598 [2024-12-09 04:59:13.129974] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:36.598 [2024-12-09 04:59:13.129979] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
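The records above show the nvmfappstart step: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with core mask 0xE, its PID (3406930) is captured, and the harness waits for the RPC socket to come up before issuing any rpc.py calls. A minimal stand-alone sketch of that step, assuming the same paths and namespace name as the trace; the polling loop is only illustrative and is not the harness's actual waitforlisten implementation:

    # launch the NVMe-oF target inside the target-side network namespace
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # block until the app answers on /var/tmp/spdk.sock, bailing out if it dies during startup
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1
        sleep 0.5
    done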
00:04:36.598 [2024-12-09 04:59:13.131429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:36.598 [2024-12-09 04:59:13.131517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:36.599 [2024-12-09 04:59:13.131519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.599 04:59:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.599 04:59:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:04:36.599 04:59:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:04:36.599 04:59:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:36.599 04:59:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:04:36.857 04:59:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:36.857 04:59:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:04:36.857 04:59:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:04:36.857 [2024-12-09 04:59:13.442499] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:36.857 04:59:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:04:37.126 04:59:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:04:37.441 [2024-12-09 04:59:13.843935] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:37.441 04:59:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:37.441 04:59:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:04:37.699 Malloc0 00:04:37.699 04:59:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:37.957 Delay0 00:04:37.957 04:59:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:38.216 04:59:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:04:38.474 NULL1 00:04:38.474 04:59:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:04:38.474 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3407410 00:04:38.474 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:04:38.474 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:04:38.474 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:38.730 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:38.988 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:04:38.988 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:04:39.245 true 00:04:39.245 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:04:39.245 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:39.502 04:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:39.502 04:59:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:04:39.502 04:59:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:04:39.761 true 00:04:39.761 04:59:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:04:39.761 04:59:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:41.139 Read completed with error (sct=0, sc=11) 00:04:41.139 04:59:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:41.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:41.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:41.139 04:59:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:04:41.139 04:59:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:04:41.139 true 
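From this point the trace repeats the same hot-plug cycle from ns_hotplug_stress.sh for null_size 1001 onward: while the spdk_nvme_perf process (PERF_PID=3407410) is still alive, namespace 1 is removed from nqn.2016-06.io.spdk:cnode1, Delay0 is added back, and NULL1 is resized one step larger. A minimal sketch of one such cycle, reconstructed from the rpc.py calls visible in the trace; the loop framing is illustrative, not the script's verbatim source:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1000
    # keep hot-plugging namespaces for as long as the perf workload is running
    while kill -0 "$PERF_PID" 2>/dev/null; do
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1       # hot-remove namespace 1
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0     # hot-add the delay bdev back as a namespace
        null_size=$((null_size + 1))
        "$rpc" bdev_null_resize NULL1 "$null_size"     # resize NULL1: 1001, 1002, ...
    done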
00:04:41.398 04:59:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:04:41.398 04:59:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:41.398 04:59:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:41.657 04:59:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:04:41.657 04:59:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:04:41.938 true 00:04:41.938 04:59:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:04:41.938 04:59:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:43.317 04:59:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:43.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:43.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:43.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:43.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:43.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:43.317 04:59:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:04:43.317 04:59:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:04:43.317 true 00:04:43.317 04:59:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:04:43.317 04:59:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:44.254 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:44.254 04:59:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:44.254 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:44.513 04:59:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:04:44.513 04:59:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:04:44.513 true 00:04:44.772 04:59:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:04:44.772 04:59:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:44.772 04:59:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:45.031 04:59:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:04:45.031 04:59:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:04:45.290 true 00:04:45.290 04:59:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:04:45.290 04:59:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:46.737 04:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:46.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:46.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:46.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:46.737 04:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:04:46.737 04:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:04:46.737 true 00:04:46.737 04:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:04:46.737 04:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:47.022 04:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:47.280 04:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:04:47.280 04:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:04:47.280 true 00:04:47.280 04:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:04:47.280 04:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:48.662 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:48.662 04:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:04:48.662 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:48.662 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:48.662 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:48.662 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:48.662 04:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:04:48.662 04:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:04:48.921 true 00:04:48.921 04:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:04:48.921 04:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:49.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:49.856 04:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:49.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:49.856 04:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:04:49.856 04:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:04:50.114 true 00:04:50.114 04:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:04:50.114 04:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:50.373 04:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:50.632 04:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:04:50.632 04:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:04:50.632 true 00:04:50.891 04:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:04:50.891 04:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:51.826 04:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:51.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:51.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:52.084 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:04:52.084 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:52.084 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:52.084 04:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:04:52.084 04:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:04:52.342 true 00:04:52.342 04:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:04:52.342 04:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:53.276 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:53.276 04:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:53.276 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:53.276 04:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:04:53.276 04:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:04:53.534 true 00:04:53.534 04:59:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:04:53.534 04:59:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:53.793 04:59:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:53.793 04:59:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:04:53.793 04:59:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:04:54.051 true 00:04:54.051 04:59:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:04:54.051 04:59:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:55.450 04:59:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:55.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:55.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:55.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:55.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:55.450 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:04:55.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:55.451 04:59:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:04:55.451 04:59:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:04:55.708 true 00:04:55.708 04:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:04:55.708 04:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:56.642 04:59:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:56.643 04:59:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:04:56.643 04:59:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:04:56.900 true 00:04:56.900 04:59:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:04:56.900 04:59:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:57.157 04:59:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:57.414 04:59:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:04:57.414 04:59:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:04:57.414 true 00:04:57.414 04:59:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:04:57.414 04:59:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:58.788 04:59:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:58.788 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:58.788 04:59:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:04:58.788 04:59:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:04:59.070 true 00:04:59.070 04:59:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:04:59.070 04:59:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:59.070 04:59:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:59.329 04:59:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:04:59.329 04:59:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:04:59.587 true 00:04:59.587 04:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:04:59.587 04:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:00.964 04:59:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:00.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:00.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:00.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:00.964 04:59:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:00.964 04:59:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:00.964 true 00:05:00.964 04:59:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:05:00.964 04:59:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:01.223 04:59:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:01.481 04:59:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:01.481 04:59:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:01.739 true 00:05:01.739 04:59:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:05:01.739 04:59:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:02.673 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:02.673 04:59:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:02.931 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:05:02.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:02.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:02.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:02.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:02.931 04:59:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:02.931 04:59:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:03.188 true 00:05:03.188 04:59:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:05:03.188 04:59:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:04.121 04:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:04.121 04:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:04.121 04:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:04.379 true 00:05:04.379 04:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:05:04.379 04:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:04.637 04:59:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:04.914 04:59:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:04.914 04:59:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:04.914 true 00:05:04.914 04:59:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:05:04.914 04:59:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:06.290 04:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:06.290 04:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:06.290 04:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:06.549 true 00:05:06.549 04:59:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:05:06.549 04:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:06.549 04:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:06.808 04:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:06.808 04:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:07.067 true 00:05:07.067 04:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:05:07.067 04:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:08.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.003 04:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:08.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.261 04:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:08.261 04:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:08.519 true 00:05:08.519 04:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:05:08.519 04:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:09.452 04:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:09.452 Initializing NVMe Controllers 00:05:09.452 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:09.452 Controller IO queue size 128, less than required. 00:05:09.452 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:09.452 Controller IO queue size 128, less than required. 00:05:09.452 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
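The @44-@50 markers in the trace above are the namespace hot-plug loop of ns_hotplug_stress.sh: while the background I/O job (PID 3407410 in this run) is still alive, a namespace is removed from nqn.2016-06.io.spdk:cnode1 and re-added on the Delay0 bdev, and the NULL1 null bdev is resized one step larger each pass (1020, 1021, ... 1029 here). The summary that follows is that I/O job's latency report. A minimal bash sketch of the loop, reconstructed from the trace alone; the variable names, starting size, and exact loop form are assumptions, not the script verbatim:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  perf_pid=3407410                                # PID of the background I/O job in this run
  null_size=1000
  while kill -0 "$perf_pid" 2>/dev/null; do       # keep going while the I/O generator runs
      "$rpc" nvmf_subsystem_remove_ns "$nqn" 1    # hot-remove the namespace under test
      "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0  # hot-add it back on the Delay0 bdev
      null_size=$((null_size + 1))
      "$rpc" bdev_null_resize NULL1 "$null_size"  # resize the null bdev while I/O is in flight
  done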
00:05:09.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:05:09.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:05:09.452 Initialization complete. Launching workers.
00:05:09.452 ========================================================
00:05:09.452                                                                            Latency(us)
00:05:09.452 Device Information                                                      :     IOPS    MiB/s    Average        min        max
00:05:09.452 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  1679.16     0.82   49295.65    3045.10 1012923.28
00:05:09.452 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16005.83     7.82    7976.89    2723.52  380953.02
00:05:09.452 ========================================================
00:05:09.453 Total                                                                   : 17684.98     8.64   11900.03    2723.52 1012923.28
00:05:09.453
00:05:09.453 04:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:09.453 04:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:09.711 true 00:05:09.711 04:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3407410 00:05:09.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3407410) - No such process 00:05:09.711 04:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3407410 00:05:09.711 04:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:09.970 04:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:10.228 04:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:05:10.228 04:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:05:10.228 04:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:05:10.228 04:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:10.228 04:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:05:10.228 null0 00:05:10.486 04:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:10.486 04:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:10.486 04:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:05:10.486 null1 00:05:10.486 04:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:10.486 04:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:10.486 04:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:10.743 null2 00:05:10.743 04:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:10.743 04:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:10.743 04:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:11.001 null3 00:05:11.001 04:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:11.001 04:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:11.001 04:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:11.260 null4 00:05:11.260 04:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:11.260 04:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:11.260 04:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:11.260 null5 00:05:11.260 04:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:11.260 04:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:11.260 04:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:11.518 null6 00:05:11.518 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:11.518 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:11.518 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:11.777 null7 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
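At this point the trace has created the eight null bdevs null0 through null7 (@58-@60) and is launching one background add_remove worker per bdev (@62-@64), collecting worker PIDs for the later "wait" at @66. A sketch of that fan-out, reconstructed from the trace; the add_remove call form comes from the "@63 -- # add_remove N nullN" entries just below, add_remove itself is the worker traced at @14-@18 (sketched further on), and everything else is an assumption rather than the script verbatim:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nthreads=8
  pids=()
  for (( i = 0; i < nthreads; i++ )); do
      "$rpc" bdev_null_create "null$i" 100 4096   # one 100 MiB null bdev with 4096-byte blocks per worker
  done
  for (( i = 0; i < nthreads; i++ )); do
      add_remove $((i + 1)) "null$i" &            # each worker hot-plugs its own namespace ID / bdev pair
      pids+=($!)
  done
  wait "${pids[@]}"                               # the eight PIDs later listed at @66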
00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:11.777 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:11.778 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:11.778 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:11.778 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:11.778 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:11.778 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:11.778 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:11.778 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:11.778 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:11.778 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
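The @14-@18 markers that fill the rest of this trace are the body of that add_remove worker: each worker pins one namespace ID to its null bdev and attaches and detaches it ten times against nqn.2016-06.io.spdk:cnode1. A sketch reconstructed from those markers, again an approximation rather than the script itself:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  add_remove() {
      local nsid=$1 bdev=$2
      for (( i = 0; i < 10; i++ )); do
          # attach this worker's bdev at a fixed namespace ID, then detach it again
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }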
00:05:11.778 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:11.778 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:11.778 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:11.778 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:11.778 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:11.778 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:11.778 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:11.778 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:11.778 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:11.778 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:11.778 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:11.778 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:11.778 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:11.778 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3413023 3413024 3413026 3413028 3413030 3413032 3413034 3413036 00:05:11.778 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:11.778 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:11.778 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:12.036 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:12.036 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:12.036 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:12.036 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:12.036 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:12.036 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:12.036 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:12.036 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:12.295 04:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:12.554 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.554 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.554 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:12.554 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.554 04:59:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.554 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:12.554 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.554 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.554 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:12.554 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.554 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.554 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:12.554 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.554 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.554 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:12.554 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.554 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.554 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:12.554 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.554 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.554 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:12.554 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:12.554 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:12.554 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:12.813 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:12.813 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:12.813 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:12.813 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:12.813 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:12.813 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:12.813 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:12.813 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:13.071 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.071 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.071 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:13.071 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.071 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.071 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:13.071 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.071 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.071 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:13.071 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.071 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.071 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:13.071 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.071 04:59:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.071 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:13.071 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.071 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.072 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:13.072 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.072 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.072 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:13.072 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.072 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.072 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:13.330 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:13.330 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:13.331 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:13.331 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:13.331 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:13.331 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:13.331 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:13.331 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:13.331 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.331 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.331 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:13.331 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.331 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.331 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:13.589 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.589 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.589 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:13.589 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.589 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.589 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:13.589 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.589 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.589 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:13.589 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.589 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.589 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.589 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.589 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:13.589 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:13.589 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.589 04:59:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.589 04:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:13.589 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:13.589 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:13.589 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:13.589 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:13.589 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:13.589 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:13.589 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:13.589 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:13.848 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.848 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.848 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:13.848 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.848 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.848 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:13.848 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.848 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.848 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:13.848 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.848 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.848 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:13.848 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.848 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.848 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:13.848 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.848 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.848 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:13.848 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.848 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.848 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:13.848 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:13.848 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:13.848 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:14.107 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:14.107 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:14.107 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:14.107 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:14.107 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:14.107 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:14.107 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:14.107 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:14.367 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.367 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.367 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:14.367 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.367 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.367 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:14.367 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.367 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.367 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:14.367 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.367 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.367 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:14.367 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.367 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.367 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:14.367 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.367 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.367 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:14.367 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.367 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.367 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:14.367 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.367 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.367 04:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:14.367 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( i < 10 )) 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:14.626 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:14.627 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:14.627 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:14.885 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:14.885 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:05:14.885 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:14.885 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:14.885 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:14.885 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:14.885 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:14.885 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:15.145 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.145 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.145 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:15.145 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.145 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.145 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:15.145 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.145 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.145 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:15.145 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.145 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.145 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:15.145 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.145 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:05:15.145 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:15.145 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.145 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.145 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:15.145 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.145 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.145 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.145 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:15.145 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.145 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:15.404 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:15.404 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:15.404 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:15.404 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.404 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:15.404 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:15.404 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:15.404 04:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:15.663 04:59:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:15.663 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:15.922 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.922 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.922 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.922 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.922 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.922 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.922 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.922 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.922 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.922 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.922 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.922 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.922 
04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.922 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.922 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:15.922 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:15.922 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:15.922 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:15.922 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:15.922 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:15.922 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:15.922 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:15.922 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:15.922 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:15.922 rmmod nvme_tcp 00:05:15.922 rmmod nvme_fabrics 00:05:15.922 rmmod nvme_keyring 00:05:15.922 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:16.181 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:05:16.181 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:16.181 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3406930 ']' 00:05:16.181 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3406930 00:05:16.181 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3406930 ']' 00:05:16.181 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3406930 00:05:16.181 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:05:16.181 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.181 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3406930 00:05:16.181 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:16.181 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:16.181 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3406930' 00:05:16.181 killing process with pid 3406930 00:05:16.181 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3406930 00:05:16.181 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3406930 00:05:16.441 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:16.441 
04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:16.441 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:16.441 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:05:16.441 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:05:16.441 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:16.441 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:05:16.441 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:16.441 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:16.441 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:16.441 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:16.441 04:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:18.343 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:18.343 00:05:18.343 real 0m47.391s 00:05:18.343 user 3m15.720s 00:05:18.343 sys 0m14.827s 00:05:18.343 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.343 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:18.343 ************************************ 00:05:18.343 END TEST nvmf_ns_hotplug_stress 00:05:18.343 ************************************ 00:05:18.343 04:59:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:18.343 04:59:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:18.343 04:59:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.343 04:59:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:18.343 ************************************ 00:05:18.343 START TEST nvmf_delete_subsystem 00:05:18.343 ************************************ 00:05:18.343 04:59:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:18.602 * Looking for test storage... 
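The @16-@18 trace lines above are the core of ns_hotplug_stress.sh: a counter loop that hot-adds namespaces 1 through 8 (each backed by one of the null0..null7 bdevs) to nqn.2016-06.io.spdk:cnode1 and then hot-removes them again, repeated ten times while host I/O is in flight. A minimal sketch of that add/remove pattern, assuming the per-pass ordering is simply shuffled (the real script's ordering logic is not visible in this excerpt):

  #!/usr/bin/env bash
  # Sketch of the hot-plug stress pattern seen in the trace above.
  # Assumes the SPDK target is already running and null0..null7 bdevs exist.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  i=0
  while (( i < 10 )); do
      for n in $(shuf -e {1..8}); do                   # hot-add nsid 1..8 in random order
          "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
      done
      for n in $(shuf -e {1..8}); do                   # hot-remove them in random order
          "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
      done
      (( ++i ))
  done

The nvmf_delete_subsystem test that starts next drives the same rpc.py interface, but races a whole-subsystem deletion against outstanding I/O instead of hot-plugging individual namespaces.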
00:05:18.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.602 --rc genhtml_branch_coverage=1 00:05:18.602 --rc genhtml_function_coverage=1 00:05:18.602 --rc genhtml_legend=1 00:05:18.602 --rc geninfo_all_blocks=1 00:05:18.602 --rc geninfo_unexecuted_blocks=1 00:05:18.602 00:05:18.602 ' 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.602 --rc genhtml_branch_coverage=1 00:05:18.602 --rc genhtml_function_coverage=1 00:05:18.602 --rc genhtml_legend=1 00:05:18.602 --rc geninfo_all_blocks=1 00:05:18.602 --rc geninfo_unexecuted_blocks=1 00:05:18.602 00:05:18.602 ' 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.602 --rc genhtml_branch_coverage=1 00:05:18.602 --rc genhtml_function_coverage=1 00:05:18.602 --rc genhtml_legend=1 00:05:18.602 --rc geninfo_all_blocks=1 00:05:18.602 --rc geninfo_unexecuted_blocks=1 00:05:18.602 00:05:18.602 ' 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.602 --rc genhtml_branch_coverage=1 00:05:18.602 --rc genhtml_function_coverage=1 00:05:18.602 --rc genhtml_legend=1 00:05:18.602 --rc geninfo_all_blocks=1 00:05:18.602 --rc geninfo_unexecuted_blocks=1 00:05:18.602 00:05:18.602 ' 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:18.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:05:18.602 04:59:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:23.870 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:23.870 
05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:23.870 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:23.870 Found net devices under 0000:86:00.0: cvl_0_0 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:23.870 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:23.871 Found net devices under 0000:86:00.1: cvl_0_1 
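With both E810 ports (device 0x8086:0x159b, driver ice) detected and their kernel interfaces identified as cvl_0_0 and cvl_0_1, nvmf_tcp_init splits them into a target side and an initiator side: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule opens TCP port 4420, and a ping in each direction verifies the link before the target starts. The trace below shows the full sequence; condensed, it amounts to roughly the following (interface and namespace names as they appear in this log, address flushes and the iptables comment omitted):

  # Condensed recap of the nvmf_tcp_init steps traced below.
  ip netns add cvl_0_0_ns_spdk                         # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the first port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP
  ping -c 1 10.0.0.2                                   # initiator side -> target side
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target side -> initiator side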
00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:23.871 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:24.129 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:24.129 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:24.129 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:24.129 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:24.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:24.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:05:24.129 00:05:24.129 --- 10.0.0.2 ping statistics --- 00:05:24.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:24.129 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:05:24.129 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:24.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:24.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:05:24.129 00:05:24.129 --- 10.0.0.1 ping statistics --- 00:05:24.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:24.130 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:05:24.130 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:24.130 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:05:24.130 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:24.130 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:24.130 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:24.130 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:24.130 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:24.130 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:24.130 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:24.130 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:05:24.130 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:24.130 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.130 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:24.130 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3417450 00:05:24.130 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3417450 00:05:24.130 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:05:24.130 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3417450 ']' 00:05:24.130 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.130 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.130 05:00:00 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.130 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.130 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:24.130 [2024-12-09 05:00:00.678179] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:05:24.130 [2024-12-09 05:00:00.678230] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:24.130 [2024-12-09 05:00:00.748537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.413 [2024-12-09 05:00:00.790270] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:24.413 [2024-12-09 05:00:00.790307] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:24.413 [2024-12-09 05:00:00.790314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:24.413 [2024-12-09 05:00:00.790320] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:24.413 [2024-12-09 05:00:00.790326] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:24.413 [2024-12-09 05:00:00.791535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.413 [2024-12-09 05:00:00.791539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:24.413 [2024-12-09 05:00:00.930193] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:24.413 05:00:00 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:24.413 [2024-12-09 05:00:00.950418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:24.413 NULL1 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:24.413 Delay0 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3417514 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:05:24.413 05:00:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:24.413 [2024-12-09 05:00:01.052176] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
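At this point the target side is fully assembled: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 (serial SPDK00000000000001, up to 10 namespaces), a listener on 10.0.0.2:4420, and a Delay0 bdev (the NULL1 null bdev wrapped by bdev_delay with 1,000,000 us latencies) exported as its only namespace, so that I/O stays queued long enough for the deletion to race against it. spdk_nvme_perf then drives 512-byte random I/O at a 70/30 read/write mix and queue depth 128 for 5 seconds, and after the 2-second sleep the subsystem is deleted out from under it. The rpc_cmd calls in the trace correspond to the following rpc.py invocations (a sketch of the same sequence; the test script issues them through its rpc_cmd wrapper rather than calling rpc.py directly):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$rpc" bdev_null_create NULL1 1000 512
  "$rpc" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # spdk_nvme_perf runs against trtype:tcp traddr:10.0.0.2 trsvcid:4420, and ~2 s in:
  "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The 'completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' lines that follow are the expected outcome of that race: the outstanding perf commands are aborted as the subsystem's queue pairs are torn down.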
00:05:26.941 05:00:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:26.941 05:00:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.941 05:00:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 starting I/O failed: -6 00:05:26.941 Write completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 starting I/O failed: -6 00:05:26.941 Write completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 starting I/O failed: -6 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Write completed with error (sct=0, sc=8) 00:05:26.941 starting I/O failed: -6 00:05:26.941 Write completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Write completed with error (sct=0, sc=8) 00:05:26.941 starting I/O failed: -6 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Write completed with error (sct=0, sc=8) 00:05:26.941 Write completed with error (sct=0, sc=8) 00:05:26.941 starting I/O failed: -6 00:05:26.941 Write completed with error (sct=0, sc=8) 00:05:26.941 Write completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Write completed with error (sct=0, sc=8) 00:05:26.941 starting I/O failed: -6 00:05:26.941 Write completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Write completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 starting I/O failed: -6 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 starting I/O failed: -6 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Write completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 starting I/O failed: -6 00:05:26.941 Write completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Write completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 starting I/O failed: -6 00:05:26.941 [2024-12-09 05:00:03.173611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f381c00d4d0 is same with the state(6) to be set 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Write completed with error (sct=0, sc=8) 00:05:26.941 starting I/O failed: -6 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed 
with error (sct=0, sc=8) 00:05:26.941 starting I/O failed: -6 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 starting I/O failed: -6 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Write completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 starting I/O failed: -6 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 starting I/O failed: -6 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 starting I/O failed: -6 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Write completed with error (sct=0, sc=8) 00:05:26.941 Write completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 starting I/O failed: -6 00:05:26.941 Write completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 starting I/O failed: -6 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Write completed with error (sct=0, sc=8) 00:05:26.941 Write completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 starting I/O failed: -6 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Read completed with error (sct=0, sc=8) 00:05:26.941 Write completed with error (sct=0, sc=8) 00:05:26.941 starting I/O failed: -6 00:05:26.942 Write completed with error (sct=0, sc=8) 00:05:26.942 Write completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 starting I/O failed: -6 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Write completed with error (sct=0, sc=8) 00:05:26.942 [2024-12-09 05:00:03.174255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1550860 is same with the state(6) to be set 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Write completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Write completed with error (sct=0, sc=8) 
00:05:26.942 Write completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Write completed with error (sct=0, sc=8) 00:05:26.942 Write completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Write completed with error (sct=0, sc=8) 00:05:26.942 Write completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Write completed with error (sct=0, sc=8) 00:05:26.942 Write completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Write completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Write completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Write completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Write completed with error (sct=0, sc=8) 00:05:26.942 Write completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Write completed with error (sct=0, sc=8) 00:05:26.942 Write completed with error (sct=0, sc=8) 00:05:26.942 Write completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Write completed with error 
(sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Write completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Write completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Write completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 [2024-12-09 05:00:03.174612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f381c000c40 is same with the state(6) to be set 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Write completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:26.942 Write completed with error (sct=0, sc=8) 00:05:26.942 Read completed with error (sct=0, sc=8) 00:05:27.514 [2024-12-09 05:00:04.147169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15519b0 is same with the state(6) to be set 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Write completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Write completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Write completed with error (sct=0, sc=8) 00:05:27.771 Write completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Write completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Write completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Write completed with error (sct=0, sc=8) 00:05:27.771 Write completed with error (sct=0, sc=8) 00:05:27.771 Write completed with error (sct=0, sc=8) 00:05:27.771 Write completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 [2024-12-09 05:00:04.175535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15502c0 is same with the state(6) to be set 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Write completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Write completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error 
(sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Write completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Write completed with error (sct=0, sc=8) 00:05:27.771 Write completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Write completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Write completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Write completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 Read completed with error (sct=0, sc=8) 00:05:27.771 [2024-12-09 05:00:04.175661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1550680 is same with the state(6) to be set 00:05:27.771 Write completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Write completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Write completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Write completed with error (sct=0, sc=8) 00:05:27.772 [2024-12-09 05:00:04.177318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f381c00d800 is same with the state(6) to be set 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Write completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 00:05:27.772 Read completed with error (sct=0, sc=8) 
00:05:27.772 [2024-12-09 05:00:04.177908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f381c00d020 is same with the state(6) to be set 00:05:27.772 Initializing NVMe Controllers 00:05:27.772 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:27.772 Controller IO queue size 128, less than required. 00:05:27.772 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:27.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:05:27.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:05:27.772 Initialization complete. Launching workers. 00:05:27.772 ======================================================== 00:05:27.772 Latency(us) 00:05:27.772 Device Information : IOPS MiB/s Average min max 00:05:27.772 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.68 0.08 894787.17 406.89 1012966.38 00:05:27.772 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.74 0.08 911959.06 382.27 1042791.84 00:05:27.772 ======================================================== 00:05:27.772 Total : 333.42 0.16 903168.69 382.27 1042791.84 00:05:27.772 00:05:27.772 [2024-12-09 05:00:04.178444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15519b0 (9): Bad file descriptor 00:05:27.772 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.772 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:05:27.772 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3417514 00:05:27.772 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:05:27.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3417514 00:05:28.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3417514) - No such process 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3417514 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3417514 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3417514 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@655 -- # es=1 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:28.339 [2024-12-09 05:00:04.707131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3418253 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3418253 00:05:28.339 05:00:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:28.339 [2024-12-09 05:00:04.775151] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
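With the subsystem re-created and perf pid 3418253 launched for a 3 s run, the trace settles into the script's bounded wait: probe the pid with kill -0 every half second and bail out if it survives too many iterations. As a standalone idiom (function and variable names below are illustrative, not the script's own; the trace uses a limit of 20 iterations here and 30 in the first pass):

  # poll a backgrounded spdk_nvme_perf pid until it exits, giving up after ~10 s
  wait_for_perf_exit() {
      local pid=$1 delay=0
      while kill -0 "$pid" 2>/dev/null; do
          if (( delay++ > 20 )); then
              echo "perf $pid still running" >&2
              return 1
          fi
          sleep 0.5
      done
  }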
00:05:28.598 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:28.598 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3418253 00:05:28.598 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:29.163 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:29.163 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3418253 00:05:29.163 05:00:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:29.729 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:29.729 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3418253 00:05:29.729 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:30.297 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:30.297 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3418253 00:05:30.297 05:00:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:30.862 05:00:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:30.862 05:00:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3418253 00:05:30.862 05:00:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:31.119 05:00:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:31.119 05:00:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3418253 00:05:31.119 05:00:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:31.417 Initializing NVMe Controllers 00:05:31.417 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:31.417 Controller IO queue size 128, less than required. 00:05:31.417 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:31.417 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:05:31.417 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:05:31.417 Initialization complete. Launching workers. 
00:05:31.417 ======================================================== 00:05:31.417 Latency(us) 00:05:31.417 Device Information : IOPS MiB/s Average min max 00:05:31.417 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003585.37 1000168.07 1012030.73 00:05:31.417 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005215.31 1000330.32 1012308.89 00:05:31.417 ======================================================== 00:05:31.417 Total : 256.00 0.12 1004400.34 1000168.07 1012308.89 00:05:31.417 00:05:31.766 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:31.766 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3418253 00:05:31.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3418253) - No such process 00:05:31.766 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3418253 00:05:31.766 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:31.766 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:05:31.766 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:31.766 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:05:31.766 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:31.766 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:05:31.766 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:31.766 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:31.766 rmmod nvme_tcp 00:05:31.766 rmmod nvme_fabrics 00:05:31.766 rmmod nvme_keyring 00:05:31.766 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:31.766 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:05:31.766 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:05:31.766 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3417450 ']' 00:05:31.766 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3417450 00:05:31.766 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3417450 ']' 00:05:31.766 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3417450 00:05:31.766 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:05:31.766 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.766 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3417450 00:05:32.033 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.033 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:05:32.033 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3417450' 00:05:32.033 killing process with pid 3417450 00:05:32.033 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3417450 00:05:32.033 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3417450 00:05:32.033 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:32.033 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:32.033 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:32.033 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:05:32.033 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:05:32.033 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:05:32.033 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:32.033 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:32.033 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:32.033 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:32.033 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:32.033 05:00:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:34.562 00:05:34.562 real 0m15.678s 00:05:34.562 user 0m29.072s 00:05:34.562 sys 0m5.155s 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:34.562 ************************************ 00:05:34.562 END TEST nvmf_delete_subsystem 00:05:34.562 ************************************ 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:34.562 ************************************ 00:05:34.562 START TEST nvmf_host_management 00:05:34.562 ************************************ 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:05:34.562 * Looking for test storage... 
00:05:34.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:34.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.562 --rc genhtml_branch_coverage=1 00:05:34.562 --rc genhtml_function_coverage=1 00:05:34.562 --rc genhtml_legend=1 00:05:34.562 --rc geninfo_all_blocks=1 00:05:34.562 --rc geninfo_unexecuted_blocks=1 00:05:34.562 00:05:34.562 ' 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:34.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.562 --rc genhtml_branch_coverage=1 00:05:34.562 --rc genhtml_function_coverage=1 00:05:34.562 --rc genhtml_legend=1 00:05:34.562 --rc geninfo_all_blocks=1 00:05:34.562 --rc geninfo_unexecuted_blocks=1 00:05:34.562 00:05:34.562 ' 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:34.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.562 --rc genhtml_branch_coverage=1 00:05:34.562 --rc genhtml_function_coverage=1 00:05:34.562 --rc genhtml_legend=1 00:05:34.562 --rc geninfo_all_blocks=1 00:05:34.562 --rc geninfo_unexecuted_blocks=1 00:05:34.562 00:05:34.562 ' 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:34.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.562 --rc genhtml_branch_coverage=1 00:05:34.562 --rc genhtml_function_coverage=1 00:05:34.562 --rc genhtml_legend=1 00:05:34.562 --rc geninfo_all_blocks=1 00:05:34.562 --rc geninfo_unexecuted_blocks=1 00:05:34.562 00:05:34.562 ' 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:34.562 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:05:34.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:05:34.563 05:00:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:39.830 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:39.830 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:39.830 Found net devices under 0000:86:00.0: cvl_0_0 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:39.830 05:00:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:39.830 Found net devices under 0000:86:00.1: cvl_0_1 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:39.830 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:39.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:39.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:05:39.831 00:05:39.831 --- 10.0.0.2 ping statistics --- 00:05:39.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:39.831 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:05:39.831 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:39.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:39.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:05:39.831 00:05:39.831 --- 10.0.0.1 ping statistics --- 00:05:39.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:39.831 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:05:39.831 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:39.831 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:05:39.831 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:39.831 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:39.831 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:39.831 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:39.831 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:39.831 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:39.831 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:39.831 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:05:39.831 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:05:39.831 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:05:39.831 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:39.831 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:39.831 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:39.831 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3422845 00:05:39.831 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3422845 00:05:39.831 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:05:39.831 05:00:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3422845 ']' 00:05:39.831 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.831 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.831 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.831 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.831 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:40.097 [2024-12-09 05:00:16.515688] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:05:40.097 [2024-12-09 05:00:16.515736] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:40.097 [2024-12-09 05:00:16.582950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:40.097 [2024-12-09 05:00:16.624790] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:40.097 [2024-12-09 05:00:16.624835] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:40.097 [2024-12-09 05:00:16.624843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:40.097 [2024-12-09 05:00:16.624848] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:40.097 [2024-12-09 05:00:16.624853] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:40.097 [2024-12-09 05:00:16.626526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.097 [2024-12-09 05:00:16.626634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:40.097 [2024-12-09 05:00:16.626723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:40.097 [2024-12-09 05:00:16.626731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.097 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.097 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:05:40.097 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:40.097 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:40.097 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:40.355 [2024-12-09 05:00:16.773961] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:40.355 Malloc0 00:05:40.355 [2024-12-09 05:00:16.849769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=3422917 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3422917 /var/tmp/bdevperf.sock 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3422917 ']' 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:05:40.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:05:40.355 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:05:40.355 { 00:05:40.355 "params": { 00:05:40.355 "name": "Nvme$subsystem", 00:05:40.355 "trtype": "$TEST_TRANSPORT", 00:05:40.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:05:40.355 "adrfam": "ipv4", 00:05:40.355 "trsvcid": "$NVMF_PORT", 00:05:40.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:05:40.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:05:40.355 "hdgst": ${hdgst:-false}, 00:05:40.356 "ddgst": ${ddgst:-false} 00:05:40.356 }, 00:05:40.356 "method": "bdev_nvme_attach_controller" 00:05:40.356 } 00:05:40.356 EOF 00:05:40.356 )") 00:05:40.356 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:05:40.356 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:05:40.356 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:05:40.356 05:00:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:05:40.356 "params": { 00:05:40.356 "name": "Nvme0", 00:05:40.356 "trtype": "tcp", 00:05:40.356 "traddr": "10.0.0.2", 00:05:40.356 "adrfam": "ipv4", 00:05:40.356 "trsvcid": "4420", 00:05:40.356 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:05:40.356 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:05:40.356 "hdgst": false, 00:05:40.356 "ddgst": false 00:05:40.356 }, 00:05:40.356 "method": "bdev_nvme_attach_controller" 00:05:40.356 }' 00:05:40.356 [2024-12-09 05:00:16.945994] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:05:40.356 [2024-12-09 05:00:16.946043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422917 ] 00:05:40.612 [2024-12-09 05:00:17.012035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.612 [2024-12-09 05:00:17.053377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.612 Running I/O for 10 seconds... 00:05:40.869 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.869 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:05:40.869 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:05:40.869 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.869 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:40.869 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.869 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:40.869 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:05:40.869 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:05:40.869 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:05:40.869 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:05:40.869 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:05:40.869 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:05:40.869 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:05:40.869 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:05:40.869 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:05:40.869 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.869 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:40.869 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.869 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:05:40.869 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:05:40.869 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:05:41.127 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:05:41.127 
05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:05:41.127 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:05:41.127 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:05:41.127 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.127 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:41.127 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.127 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:05:41.127 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:05:41.127 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:05:41.127 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:05:41.127 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:05:41.127 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:05:41.127 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.127 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:41.127 [2024-12-09 05:00:17.664449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf350b0 is same with the state(6) to be set 00:05:41.127 [2024-12-09 05:00:17.664525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf350b0 is same with the state(6) to be set 00:05:41.127 [2024-12-09 05:00:17.664534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf350b0 is same with the state(6) to be set 00:05:41.127 [2024-12-09 05:00:17.664540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf350b0 is same with the state(6) to be set 00:05:41.127 [2024-12-09 05:00:17.664547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf350b0 is same with the state(6) to be set 00:05:41.127 [2024-12-09 05:00:17.664553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf350b0 is same with the state(6) to be set 00:05:41.127 [2024-12-09 05:00:17.664559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf350b0 is same with the state(6) to be set 00:05:41.127 [2024-12-09 05:00:17.664565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf350b0 is same with the state(6) to be set 00:05:41.127 [2024-12-09 05:00:17.664571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf350b0 is same with the state(6) to be set 00:05:41.127 [2024-12-09 05:00:17.664577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf350b0 is same with the state(6) to be set 00:05:41.127 [2024-12-09 05:00:17.664583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf350b0 is same with 
the state(6) to be set 00:05:41.127 [2024-12-09 05:00:17.664594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf350b0 is same with the state(6) to be set 00:05:41.127 [2024-12-09 05:00:17.664600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf350b0 is same with the state(6) to be set 00:05:41.127 [2024-12-09 05:00:17.664606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf350b0 is same with the state(6) to be set 00:05:41.127 [2024-12-09 05:00:17.664612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf350b0 is same with the state(6) to be set 00:05:41.127 [2024-12-09 05:00:17.664618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf350b0 is same with the state(6) to be set 00:05:41.127 [2024-12-09 05:00:17.664623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf350b0 is same with the state(6) to be set 00:05:41.127 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.127 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:05:41.127 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.127 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:41.127 [2024-12-09 05:00:17.674148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:05:41.127 [2024-12-09 05:00:17.674184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.674194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:05:41.127 [2024-12-09 05:00:17.674202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.674210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:05:41.127 [2024-12-09 05:00:17.674217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.674225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:05:41.127 [2024-12-09 05:00:17.674232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.674239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a5510 is same with the state(6) to be set 00:05:41.127 [2024-12-09 05:00:17.674976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.674994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 
nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 
lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.127 [2024-12-09 05:00:17.675456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.127 [2024-12-09 05:00:17.675464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 
lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 
lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 
lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 
lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.675981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:41.128 [2024-12-09 05:00:17.675987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:41.128 [2024-12-09 05:00:17.676936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:05:41.128 task offset: 98304 on job bdev=Nvme0n1 fails 00:05:41.128 00:05:41.128 Latency(us) 00:05:41.128 [2024-12-09T04:00:17.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:41.128 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:05:41.128 Job: Nvme0n1 ended in about 0.42 seconds with error 00:05:41.128 Verification LBA range: start 0x0 length 0x400 00:05:41.128 Nvme0n1 : 0.42 1834.78 114.67 152.90 0.00 31353.35 1424.70 27354.16 00:05:41.128 [2024-12-09T04:00:17.774Z] =================================================================================================================== 00:05:41.128 [2024-12-09T04:00:17.774Z] Total : 1834.78 114.67 152.90 0.00 31353.35 1424.70 27354.16 00:05:41.128 [2024-12-09 05:00:17.679340] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:41.128 [2024-12-09 05:00:17.679362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a5510 (9): Bad file descriptor 00:05:41.128 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.128 05:00:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:05:41.128 [2024-12-09 05:00:17.690014] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:05:42.059 05:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3422917 00:05:42.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3422917) - No such process 00:05:42.059 05:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:05:42.059 05:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:05:42.059 05:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:05:42.059 05:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:05:42.059 05:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:05:42.059 05:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:05:42.059 05:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:05:42.059 05:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:05:42.059 { 00:05:42.059 "params": { 00:05:42.059 "name": "Nvme$subsystem", 00:05:42.059 "trtype": "$TEST_TRANSPORT", 00:05:42.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:05:42.059 "adrfam": "ipv4", 00:05:42.059 "trsvcid": "$NVMF_PORT", 00:05:42.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:05:42.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:05:42.059 "hdgst": ${hdgst:-false}, 00:05:42.059 "ddgst": ${ddgst:-false} 00:05:42.059 }, 00:05:42.059 "method": "bdev_nvme_attach_controller" 00:05:42.059 } 00:05:42.059 EOF 00:05:42.059 )") 00:05:42.059 05:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:05:42.059 05:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:05:42.059 05:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:05:42.059 05:00:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:05:42.059 "params": { 00:05:42.059 "name": "Nvme0", 00:05:42.059 "trtype": "tcp", 00:05:42.059 "traddr": "10.0.0.2", 00:05:42.059 "adrfam": "ipv4", 00:05:42.059 "trsvcid": "4420", 00:05:42.059 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:05:42.059 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:05:42.059 "hdgst": false, 00:05:42.059 "ddgst": false 00:05:42.059 }, 00:05:42.059 "method": "bdev_nvme_attach_controller" 00:05:42.059 }' 00:05:42.316 [2024-12-09 05:00:18.739004] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:05:42.316 [2024-12-09 05:00:18.739052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423175 ] 00:05:42.316 [2024-12-09 05:00:18.804928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.316 [2024-12-09 05:00:18.844450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.573 Running I/O for 1 seconds... 
00:05:43.947 1856.00 IOPS, 116.00 MiB/s 00:05:43.947 Latency(us) 00:05:43.947 [2024-12-09T04:00:20.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:43.947 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:05:43.947 Verification LBA range: start 0x0 length 0x400 00:05:43.947 Nvme0n1 : 1.00 1912.59 119.54 0.00 0.00 32940.51 7579.38 27582.11 00:05:43.947 [2024-12-09T04:00:20.593Z] =================================================================================================================== 00:05:43.948 [2024-12-09T04:00:20.594Z] Total : 1912.59 119.54 0.00 0.00 32940.51 7579.38 27582.11 00:05:43.948 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:05:43.948 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:05:43.948 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:05:43.948 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:05:43.948 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:05:43.948 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:43.948 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:05:43.948 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:43.948 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:05:43.948 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:43.948 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:43.948 rmmod nvme_tcp 00:05:43.948 rmmod nvme_fabrics 00:05:43.948 rmmod nvme_keyring 00:05:43.948 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:43.948 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:05:43.948 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:05:43.948 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3422845 ']' 00:05:43.948 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3422845 00:05:43.948 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3422845 ']' 00:05:43.948 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3422845 00:05:43.948 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:05:43.948 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.948 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3422845 00:05:43.948 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:43.948 05:00:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:43.948 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3422845' 00:05:43.948 killing process with pid 3422845 00:05:43.948 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3422845 00:05:43.948 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3422845 00:05:44.206 [2024-12-09 05:00:20.709941] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:05:44.206 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:44.206 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:44.206 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:44.206 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:05:44.206 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:44.206 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:05:44.206 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:05:44.206 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:44.206 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:44.206 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:44.206 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:44.206 05:00:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:46.738 05:00:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:46.738 05:00:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:05:46.738 00:05:46.738 real 0m12.097s 00:05:46.738 user 0m20.240s 00:05:46.738 sys 0m5.227s 00:05:46.738 05:00:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.738 05:00:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:46.738 ************************************ 00:05:46.738 END TEST nvmf_host_management 00:05:46.738 ************************************ 00:05:46.739 05:00:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:05:46.739 05:00:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:46.739 05:00:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.739 05:00:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:46.739 ************************************ 00:05:46.739 START TEST nvmf_lvol 00:05:46.739 ************************************ 00:05:46.739 05:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:05:46.739 * Looking for test storage... 00:05:46.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:46.739 05:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:46.739 05:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:05:46.739 05:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:46.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.739 --rc genhtml_branch_coverage=1 00:05:46.739 --rc genhtml_function_coverage=1 00:05:46.739 --rc genhtml_legend=1 00:05:46.739 --rc geninfo_all_blocks=1 00:05:46.739 --rc geninfo_unexecuted_blocks=1 00:05:46.739 00:05:46.739 ' 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:46.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.739 --rc genhtml_branch_coverage=1 00:05:46.739 --rc genhtml_function_coverage=1 00:05:46.739 --rc genhtml_legend=1 00:05:46.739 --rc geninfo_all_blocks=1 00:05:46.739 --rc geninfo_unexecuted_blocks=1 00:05:46.739 00:05:46.739 ' 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:46.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.739 --rc genhtml_branch_coverage=1 00:05:46.739 --rc genhtml_function_coverage=1 00:05:46.739 --rc genhtml_legend=1 00:05:46.739 --rc geninfo_all_blocks=1 00:05:46.739 --rc geninfo_unexecuted_blocks=1 00:05:46.739 00:05:46.739 ' 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:46.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.739 --rc genhtml_branch_coverage=1 00:05:46.739 --rc genhtml_function_coverage=1 00:05:46.739 --rc genhtml_legend=1 00:05:46.739 --rc geninfo_all_blocks=1 00:05:46.739 --rc geninfo_unexecuted_blocks=1 00:05:46.739 00:05:46.739 ' 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
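The block above is autotest_common.sh deciding which --rc coverage options to use: it reads the installed lcov version (1.15 here), splits it on the separators and compares it field by field against 2. Below is a simplified, self-contained sketch of that comparison, not the exact helper from scripts/common.sh; it assumes plain numeric version fields.

#!/usr/bin/env bash
# Simplified sketch of the version comparison traced above: split both version
# strings on '.', '-' or ':', then compare numerically field by field.
version_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((i = 0; i < len; i++)); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}
        ((a < b)) && return 0    # first version is older
        ((a > b)) && return 1    # first version is newer
    done
    return 1                     # equal, so not less-than
}

# e.g. lcov 1.15 predates 2.x here, so the legacy lcov_* flag names are kept
if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
    echo "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
fi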
00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.739 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:46.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
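One detail worth noting in the block above: sourcing nvmf/common.sh logs "line 33: [: : integer expression expected" because an empty variable reaches a numeric -eq test inside single brackets (the same message shows up again later when nvmf_lvs_grow sources the file). A tiny illustration of that failure mode and a guarded form follows; the variable name is made up for the example and is not the one common.sh actually tests.

# Reproduces the error class seen above, then a defaulted form that avoids it.
flag=""                                    # unset/empty test flag
[ "$flag" -eq 1 ] && echo "enabled"        # -> [: : integer expression expected
[ "${flag:-0}" -eq 1 ] && echo "enabled"   # empty value defaults to 0, the test stays valid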
LVOL_BDEV_INIT_SIZE=20 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:05:46.740 05:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:52.006 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:52.006 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:52.006 05:00:28 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:52.006 Found net devices under 0000:86:00.0: cvl_0_0 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:52.006 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:52.007 Found net devices under 0000:86:00.1: cvl_0_1 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
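The gather_supported_nvmf_pci_devs trace above first matches PCI IDs for the Intel E810 NICs (0x8086 - 0x159b) and then resolves each PCI address to its kernel net device by globbing sysfs. A condensed sketch of that lookup follows; the PCI addresses are the two printed in the log, and the helper's remaining bookkeeping is omitted.

# Resolve E810 PCI functions to net device names via sysfs, as the trace does.
pci_devs=(0000:86:00.0 0000:86:00.1)                 # addresses found above
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")          # keep only the device name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done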
> 1 )) 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:52.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:52.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:05:52.007 00:05:52.007 --- 10.0.0.2 ping statistics --- 00:05:52.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:52.007 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:52.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:52.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:05:52.007 00:05:52.007 --- 10.0.0.1 ping statistics --- 00:05:52.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:52.007 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3426943 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3426943 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3426943 ']' 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.007 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:05:52.007 [2024-12-09 05:00:28.594133] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
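The nvmf_tcp_init sequence traced above moves the first E810 port into a private network namespace for the target (10.0.0.2) and leaves the second port in the default namespace as the initiator (10.0.0.1), opens the NVMe/TCP listener port, and ping-checks both directions before the target application is launched inside the namespace. A condensed sketch of those steps, using the interface names, addresses and rule tag from the trace:

# Sketch of the namespace/network setup traced above.
TARGET_NS=cvl_0_0_ns_spdk
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"                  # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the default ns
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up
# Accept the NVMe/TCP listener port; the comment tag lets cleanup strip the rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                      # initiator -> target
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1           # target -> initiator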
00:05:52.007 [2024-12-09 05:00:28.594183] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:52.266 [2024-12-09 05:00:28.663404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.266 [2024-12-09 05:00:28.704471] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:52.266 [2024-12-09 05:00:28.704511] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:52.266 [2024-12-09 05:00:28.704518] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:52.267 [2024-12-09 05:00:28.704529] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:52.267 [2024-12-09 05:00:28.704535] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:52.267 [2024-12-09 05:00:28.705823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.267 [2024-12-09 05:00:28.705923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.267 [2024-12-09 05:00:28.705926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.267 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.267 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:05:52.267 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:52.267 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:52.267 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:05:52.267 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:52.267 05:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:52.526 [2024-12-09 05:00:29.012546] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:52.526 05:00:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:05:52.785 05:00:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:05:52.785 05:00:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:05:53.043 05:00:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:05:53.043 05:00:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:05:53.043 05:00:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:05:53.302 05:00:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3975e727-e76e-4dcb-82b9-ebebeccab1a2 00:05:53.302 05:00:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
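From here the test drives the target entirely through scripts/rpc.py against /var/tmp/spdk.sock: a TCP transport, two 64 MiB malloc bdevs with 512-byte blocks, a raid0 across them, and a logical-volume store on the raid. A condensed sketch of that sequence follows; rpc.py is run from the SPDK tree, and the lvstore UUID is captured from the call rather than hard-coded from the log.

# Sketch of the RPC sequence traced above (names and sizes from the log).
rpc="scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o -u 8192                     # transport options used by the test
$rpc bdev_malloc_create 64 512                                   # -> Malloc0
$rpc bdev_malloc_create 64 512                                   # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # raid0 over the two malloc bdevs
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # prints the new lvstore UUID
echo "lvstore: $lvs"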
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3975e727-e76e-4dcb-82b9-ebebeccab1a2 lvol 20 00:05:53.560 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1f3987d0-0e6c-4f6d-b7cb-69cec5d78722 00:05:53.560 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:53.818 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1f3987d0-0e6c-4f6d-b7cb-69cec5d78722 00:05:54.076 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:54.076 [2024-12-09 05:00:30.653272] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:54.076 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:54.335 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3427433 00:05:54.335 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:05:54.335 05:00:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:05:55.267 05:00:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 1f3987d0-0e6c-4f6d-b7cb-69cec5d78722 MY_SNAPSHOT 00:05:55.525 05:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5778a057-751f-4b8c-889a-dc6eaa89846d 00:05:55.525 05:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 1f3987d0-0e6c-4f6d-b7cb-69cec5d78722 30 00:05:55.783 05:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5778a057-751f-4b8c-889a-dc6eaa89846d MY_CLONE 00:05:56.041 05:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3164c124-2ba3-4292-8a8a-d92716d209dc 00:05:56.041 05:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3164c124-2ba3-4292-8a8a-d92716d209dc 00:05:56.609 05:00:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3427433 00:06:04.713 Initializing NVMe Controllers 00:06:04.713 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:04.713 Controller IO queue size 128, less than required. 00:06:04.713 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
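The step above exports a 20 MiB logical volume through nqn.2016-06.io.spdk:cnode0, starts spdk_nvme_perf against 10.0.0.2:4420 from the initiator side, and while the writes are in flight exercises the lvol metadata path: snapshot, resize the live volume to 30 MiB, clone the snapshot, inflate the clone. A condensed sketch of that flow follows; $lvs is the lvstore UUID returned in the previous step, and the other UUIDs are captured from the RPC output instead of hard-coding the ones in the log.

# Sketch of the export + snapshot/clone/inflate flow traced above.
rpc="scripts/rpc.py"
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)          # 20 MiB lvol on the lvstore
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Random 4 KiB writes from the initiator while the lvol is reshaped underneath.
build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
perf_pid=$!

snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)      # read-only snapshot of the lvol
$rpc bdev_lvol_resize "$lvol" 30                         # grow the live lvol to 30 MiB
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)           # thin clone of the snapshot
$rpc bdev_lvol_inflate "$clone"                          # detach the clone from its parent
wait "$perf_pid"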
00:06:04.713 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:04.713 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:04.713 Initialization complete. Launching workers. 00:06:04.713 ======================================================== 00:06:04.714 Latency(us) 00:06:04.714 Device Information : IOPS MiB/s Average min max 00:06:04.714 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11755.90 45.92 10888.67 1730.30 62572.17 00:06:04.714 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11783.70 46.03 10861.66 3711.60 54358.35 00:06:04.714 ======================================================== 00:06:04.714 Total : 23539.60 91.95 10875.15 1730.30 62572.17 00:06:04.714 00:06:04.714 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:04.971 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1f3987d0-0e6c-4f6d-b7cb-69cec5d78722 00:06:05.229 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3975e727-e76e-4dcb-82b9-ebebeccab1a2 00:06:05.487 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:05.487 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:05.487 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:05.487 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:05.487 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:05.487 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:05.487 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:05.487 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:05.487 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:05.487 rmmod nvme_tcp 00:06:05.487 rmmod nvme_fabrics 00:06:05.487 rmmod nvme_keyring 00:06:05.487 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:05.487 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:05.487 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:05.487 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3426943 ']' 00:06:05.487 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3426943 00:06:05.487 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3426943 ']' 00:06:05.487 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3426943 00:06:05.487 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:05.487 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.487 05:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3426943 00:06:05.487 05:00:42 
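Teardown then mirrors the setup in reverse: drop the subsystem, delete the lvol and its lvstore, unload the initiator-side NVMe fabrics modules, stop the target, and undo the firewall and namespace changes. A condensed sketch of that cleanup follows; $lvol and $lvs are the handles from the earlier sketches, $nvmfpid stands in for the target pid recorded at startup (3426943 in this run), and deleting the namespace directly is used here as a stand-in for the remove_spdk_ns helper the trace calls.

# Sketch of the teardown sequence traced above and at the end of host_management.
rpc="scripts/rpc.py"
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"
modprobe -v -r nvme-tcp                          # trace shows nvme_fabrics/nvme_keyring unloaded with it
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"               # stop nvmf_tgt
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules the test tagged
ip netns delete cvl_0_0_ns_spdk                  # stand-in for remove_spdk_ns
ip -4 addr flush cvl_0_1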
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.487 05:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.487 05:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3426943' 00:06:05.487 killing process with pid 3426943 00:06:05.487 05:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3426943 00:06:05.487 05:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3426943 00:06:05.745 05:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:05.745 05:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:05.745 05:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:05.745 05:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:05.745 05:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:05.745 05:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:05.745 05:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:05.745 05:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:05.745 05:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:05.745 05:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:05.745 05:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:05.745 05:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:08.280 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:08.280 00:06:08.280 real 0m21.460s 00:06:08.280 user 1m3.149s 00:06:08.280 sys 0m7.266s 00:06:08.280 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.280 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:08.280 ************************************ 00:06:08.280 END TEST nvmf_lvol 00:06:08.280 ************************************ 00:06:08.280 05:00:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:08.280 05:00:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:08.280 05:00:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:08.281 ************************************ 00:06:08.281 START TEST nvmf_lvs_grow 00:06:08.281 ************************************ 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:08.281 * Looking for test storage... 
00:06:08.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:08.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.281 --rc genhtml_branch_coverage=1 00:06:08.281 --rc genhtml_function_coverage=1 00:06:08.281 --rc genhtml_legend=1 00:06:08.281 --rc geninfo_all_blocks=1 00:06:08.281 --rc geninfo_unexecuted_blocks=1 00:06:08.281 00:06:08.281 ' 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:08.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.281 --rc genhtml_branch_coverage=1 00:06:08.281 --rc genhtml_function_coverage=1 00:06:08.281 --rc genhtml_legend=1 00:06:08.281 --rc geninfo_all_blocks=1 00:06:08.281 --rc geninfo_unexecuted_blocks=1 00:06:08.281 00:06:08.281 ' 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:08.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.281 --rc genhtml_branch_coverage=1 00:06:08.281 --rc genhtml_function_coverage=1 00:06:08.281 --rc genhtml_legend=1 00:06:08.281 --rc geninfo_all_blocks=1 00:06:08.281 --rc geninfo_unexecuted_blocks=1 00:06:08.281 00:06:08.281 ' 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:08.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.281 --rc genhtml_branch_coverage=1 00:06:08.281 --rc genhtml_function_coverage=1 00:06:08.281 --rc genhtml_legend=1 00:06:08.281 --rc geninfo_all_blocks=1 00:06:08.281 --rc geninfo_unexecuted_blocks=1 00:06:08.281 00:06:08.281 ' 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:08.281 05:00:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.281 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:08.282 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:08.282 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:13.546 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:13.547 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:13.547 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:13.547 05:00:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:13.547 Found net devices under 0000:86:00.0: cvl_0_0 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:13.547 Found net devices under 0000:86:00.1: cvl_0_1 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:13.547 05:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:13.547 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:13.547 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:13.547 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:13.547 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:13.547 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:13.547 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:13.547 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:13.547 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:13.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:13.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:06:13.547 00:06:13.547 --- 10.0.0.2 ping statistics --- 00:06:13.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.547 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:06:13.547 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:13.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:13.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:06:13.547 00:06:13.547 --- 10.0.0.1 ping statistics --- 00:06:13.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.547 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:06:13.806 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:13.806 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:06:13.806 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:13.806 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:13.806 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:13.806 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:13.806 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:13.806 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:13.806 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:13.806 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:06:13.806 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:13.806 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:13.806 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:13.806 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3432817 00:06:13.806 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3432817 00:06:13.806 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:13.806 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3432817 ']' 00:06:13.806 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.806 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.806 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.806 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.806 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:13.806 [2024-12-09 05:00:50.288511] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:06:13.806 [2024-12-09 05:00:50.288556] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:13.806 [2024-12-09 05:00:50.356531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.806 [2024-12-09 05:00:50.397973] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:13.806 [2024-12-09 05:00:50.398013] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:13.807 [2024-12-09 05:00:50.398021] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:13.807 [2024-12-09 05:00:50.398028] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:13.807 [2024-12-09 05:00:50.398033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:13.807 [2024-12-09 05:00:50.398588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.065 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.065 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:14.065 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:14.065 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:14.065 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:14.065 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:14.065 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:14.065 [2024-12-09 05:00:50.701117] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:14.323 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:14.323 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.323 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.323 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:14.323 ************************************ 00:06:14.323 START TEST lvs_grow_clean 00:06:14.323 ************************************ 00:06:14.323 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:14.323 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:14.323 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:14.323 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:14.323 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:14.323 05:00:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:14.323 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:14.323 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:14.323 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:14.323 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:14.581 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:14.581 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:14.581 05:00:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f0a9822c-ac61-4a36-a5ac-079a768cd47c 00:06:14.581 05:00:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f0a9822c-ac61-4a36-a5ac-079a768cd47c 00:06:14.581 05:00:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:14.838 05:00:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:14.838 05:00:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:14.838 05:00:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f0a9822c-ac61-4a36-a5ac-079a768cd47c lvol 150 00:06:15.096 05:00:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e1016373-d63a-4f6e-b56f-faeb2f9eeb8d 00:06:15.096 05:00:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:15.096 05:00:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:15.354 [2024-12-09 05:00:51.747494] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:15.354 [2024-12-09 05:00:51.747547] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:15.354 true 00:06:15.354 05:00:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
f0a9822c-ac61-4a36-a5ac-079a768cd47c 00:06:15.354 05:00:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:15.354 05:00:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:15.354 05:00:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:15.612 05:00:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e1016373-d63a-4f6e-b56f-faeb2f9eeb8d 00:06:15.870 05:00:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:15.870 [2024-12-09 05:00:52.505780] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:16.128 05:00:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:16.128 05:00:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3433315 00:06:16.128 05:00:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:16.128 05:00:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:16.128 05:00:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3433315 /var/tmp/bdevperf.sock 00:06:16.128 05:00:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3433315 ']' 00:06:16.128 05:00:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:16.128 05:00:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.128 05:00:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:16.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:16.128 05:00:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.128 05:00:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:16.128 [2024-12-09 05:00:52.742275] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:06:16.128 [2024-12-09 05:00:52.742320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3433315 ] 00:06:16.386 [2024-12-09 05:00:52.806358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.386 [2024-12-09 05:00:52.847427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.386 05:00:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.386 05:00:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:16.386 05:00:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:16.643 Nvme0n1 00:06:16.643 05:00:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:16.900 [ 00:06:16.900 { 00:06:16.900 "name": "Nvme0n1", 00:06:16.900 "aliases": [ 00:06:16.900 "e1016373-d63a-4f6e-b56f-faeb2f9eeb8d" 00:06:16.900 ], 00:06:16.900 "product_name": "NVMe disk", 00:06:16.900 "block_size": 4096, 00:06:16.900 "num_blocks": 38912, 00:06:16.900 "uuid": "e1016373-d63a-4f6e-b56f-faeb2f9eeb8d", 00:06:16.900 "numa_id": 1, 00:06:16.900 "assigned_rate_limits": { 00:06:16.900 "rw_ios_per_sec": 0, 00:06:16.900 "rw_mbytes_per_sec": 0, 00:06:16.900 "r_mbytes_per_sec": 0, 00:06:16.900 "w_mbytes_per_sec": 0 00:06:16.900 }, 00:06:16.900 "claimed": false, 00:06:16.900 "zoned": false, 00:06:16.900 "supported_io_types": { 00:06:16.900 "read": true, 00:06:16.900 "write": true, 00:06:16.900 "unmap": true, 00:06:16.900 "flush": true, 00:06:16.900 "reset": true, 00:06:16.900 "nvme_admin": true, 00:06:16.900 "nvme_io": true, 00:06:16.900 "nvme_io_md": false, 00:06:16.900 "write_zeroes": true, 00:06:16.900 "zcopy": false, 00:06:16.900 "get_zone_info": false, 00:06:16.900 "zone_management": false, 00:06:16.900 "zone_append": false, 00:06:16.900 "compare": true, 00:06:16.900 "compare_and_write": true, 00:06:16.900 "abort": true, 00:06:16.900 "seek_hole": false, 00:06:16.900 "seek_data": false, 00:06:16.900 "copy": true, 00:06:16.900 "nvme_iov_md": false 00:06:16.901 }, 00:06:16.901 "memory_domains": [ 00:06:16.901 { 00:06:16.901 "dma_device_id": "system", 00:06:16.901 "dma_device_type": 1 00:06:16.901 } 00:06:16.901 ], 00:06:16.901 "driver_specific": { 00:06:16.901 "nvme": [ 00:06:16.901 { 00:06:16.901 "trid": { 00:06:16.901 "trtype": "TCP", 00:06:16.901 "adrfam": "IPv4", 00:06:16.901 "traddr": "10.0.0.2", 00:06:16.901 "trsvcid": "4420", 00:06:16.901 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:16.901 }, 00:06:16.901 "ctrlr_data": { 00:06:16.901 "cntlid": 1, 00:06:16.901 "vendor_id": "0x8086", 00:06:16.901 "model_number": "SPDK bdev Controller", 00:06:16.901 "serial_number": "SPDK0", 00:06:16.901 "firmware_revision": "25.01", 00:06:16.901 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:16.901 "oacs": { 00:06:16.901 "security": 0, 00:06:16.901 "format": 0, 00:06:16.901 "firmware": 0, 00:06:16.901 "ns_manage": 0 00:06:16.901 }, 00:06:16.901 "multi_ctrlr": true, 00:06:16.901 
"ana_reporting": false 00:06:16.901 }, 00:06:16.901 "vs": { 00:06:16.901 "nvme_version": "1.3" 00:06:16.901 }, 00:06:16.901 "ns_data": { 00:06:16.901 "id": 1, 00:06:16.901 "can_share": true 00:06:16.901 } 00:06:16.901 } 00:06:16.901 ], 00:06:16.901 "mp_policy": "active_passive" 00:06:16.901 } 00:06:16.901 } 00:06:16.901 ] 00:06:16.901 05:00:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:16.901 05:00:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3433347 00:06:16.901 05:00:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:16.901 Running I/O for 10 seconds... 00:06:18.272 Latency(us) 00:06:18.272 [2024-12-09T04:00:54.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:18.272 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:18.272 Nvme0n1 : 1.00 22403.00 87.51 0.00 0.00 0.00 0.00 0.00 00:06:18.272 [2024-12-09T04:00:54.918Z] =================================================================================================================== 00:06:18.272 [2024-12-09T04:00:54.918Z] Total : 22403.00 87.51 0.00 0.00 0.00 0.00 0.00 00:06:18.272 00:06:18.837 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f0a9822c-ac61-4a36-a5ac-079a768cd47c 00:06:19.095 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:19.095 Nvme0n1 : 2.00 22545.00 88.07 0.00 0.00 0.00 0.00 0.00 00:06:19.095 [2024-12-09T04:00:55.741Z] =================================================================================================================== 00:06:19.095 [2024-12-09T04:00:55.741Z] Total : 22545.00 88.07 0.00 0.00 0.00 0.00 0.00 00:06:19.095 00:06:19.095 true 00:06:19.095 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f0a9822c-ac61-4a36-a5ac-079a768cd47c 00:06:19.095 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:19.352 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:19.352 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:19.352 05:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3433347 00:06:19.926 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:19.926 Nvme0n1 : 3.00 22613.33 88.33 0.00 0.00 0.00 0.00 0.00 00:06:19.926 [2024-12-09T04:00:56.573Z] =================================================================================================================== 00:06:19.927 [2024-12-09T04:00:56.573Z] Total : 22613.33 88.33 0.00 0.00 0.00 0.00 0.00 00:06:19.927 00:06:21.300 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:21.301 Nvme0n1 : 4.00 22687.75 88.62 0.00 0.00 0.00 0.00 0.00 00:06:21.301 [2024-12-09T04:00:57.947Z] 
=================================================================================================================== 00:06:21.301 [2024-12-09T04:00:57.947Z] Total : 22687.75 88.62 0.00 0.00 0.00 0.00 0.00 00:06:21.301 00:06:22.233 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:22.233 Nvme0n1 : 5.00 22712.80 88.72 0.00 0.00 0.00 0.00 0.00 00:06:22.233 [2024-12-09T04:00:58.879Z] =================================================================================================================== 00:06:22.233 [2024-12-09T04:00:58.879Z] Total : 22712.80 88.72 0.00 0.00 0.00 0.00 0.00 00:06:22.233 00:06:23.165 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:23.165 Nvme0n1 : 6.00 22735.83 88.81 0.00 0.00 0.00 0.00 0.00 00:06:23.165 [2024-12-09T04:00:59.811Z] =================================================================================================================== 00:06:23.165 [2024-12-09T04:00:59.811Z] Total : 22735.83 88.81 0.00 0.00 0.00 0.00 0.00 00:06:23.165 00:06:24.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:24.195 Nvme0n1 : 7.00 22709.43 88.71 0.00 0.00 0.00 0.00 0.00 00:06:24.195 [2024-12-09T04:01:00.841Z] =================================================================================================================== 00:06:24.195 [2024-12-09T04:01:00.841Z] Total : 22709.43 88.71 0.00 0.00 0.00 0.00 0.00 00:06:24.195 00:06:25.124 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:25.124 Nvme0n1 : 8.00 22732.75 88.80 0.00 0.00 0.00 0.00 0.00 00:06:25.124 [2024-12-09T04:01:01.770Z] =================================================================================================================== 00:06:25.124 [2024-12-09T04:01:01.770Z] Total : 22732.75 88.80 0.00 0.00 0.00 0.00 0.00 00:06:25.124 00:06:26.054 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:26.054 Nvme0n1 : 9.00 22742.67 88.84 0.00 0.00 0.00 0.00 0.00 00:06:26.054 [2024-12-09T04:01:02.700Z] =================================================================================================================== 00:06:26.054 [2024-12-09T04:01:02.700Z] Total : 22742.67 88.84 0.00 0.00 0.00 0.00 0.00 00:06:26.054 00:06:26.986 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:26.986 Nvme0n1 : 10.00 22768.90 88.94 0.00 0.00 0.00 0.00 0.00 00:06:26.986 [2024-12-09T04:01:03.632Z] =================================================================================================================== 00:06:26.986 [2024-12-09T04:01:03.632Z] Total : 22768.90 88.94 0.00 0.00 0.00 0.00 0.00 00:06:26.986 00:06:26.986 00:06:26.986 Latency(us) 00:06:26.986 [2024-12-09T04:01:03.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:26.986 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:26.986 Nvme0n1 : 10.01 22769.96 88.95 0.00 0.00 5617.70 3376.53 11625.52 00:06:26.986 [2024-12-09T04:01:03.632Z] =================================================================================================================== 00:06:26.986 [2024-12-09T04:01:03.632Z] Total : 22769.96 88.95 0.00 0.00 5617.70 3376.53 11625.52 00:06:26.986 { 00:06:26.986 "results": [ 00:06:26.986 { 00:06:26.986 "job": "Nvme0n1", 00:06:26.986 "core_mask": "0x2", 00:06:26.986 "workload": "randwrite", 00:06:26.986 "status": "finished", 00:06:26.986 "queue_depth": 128, 00:06:26.986 "io_size": 4096, 00:06:26.986 
"runtime": 10.005154, 00:06:26.986 "iops": 22769.964360368667, 00:06:26.986 "mibps": 88.9451732826901, 00:06:26.986 "io_failed": 0, 00:06:26.986 "io_timeout": 0, 00:06:26.986 "avg_latency_us": 5617.702838117016, 00:06:26.986 "min_latency_us": 3376.528695652174, 00:06:26.986 "max_latency_us": 11625.51652173913 00:06:26.986 } 00:06:26.986 ], 00:06:26.986 "core_count": 1 00:06:26.986 } 00:06:26.986 05:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3433315 00:06:26.986 05:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3433315 ']' 00:06:26.986 05:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3433315 00:06:26.986 05:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:06:26.986 05:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.986 05:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3433315 00:06:27.245 05:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:27.245 05:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:27.245 05:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3433315' 00:06:27.245 killing process with pid 3433315 00:06:27.245 05:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3433315 00:06:27.245 Received shutdown signal, test time was about 10.000000 seconds 00:06:27.245 00:06:27.245 Latency(us) 00:06:27.245 [2024-12-09T04:01:03.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:27.245 [2024-12-09T04:01:03.891Z] =================================================================================================================== 00:06:27.245 [2024-12-09T04:01:03.891Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:27.245 05:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3433315 00:06:27.245 05:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:27.503 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:27.760 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f0a9822c-ac61-4a36-a5ac-079a768cd47c 00:06:27.760 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:06:27.760 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:06:27.760 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:06:27.760 05:01:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:28.018 [2024-12-09 05:01:04.574364] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:06:28.018 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f0a9822c-ac61-4a36-a5ac-079a768cd47c 00:06:28.018 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:06:28.018 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f0a9822c-ac61-4a36-a5ac-079a768cd47c 00:06:28.018 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:28.018 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.018 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:28.018 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.018 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:28.018 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.018 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:28.018 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:28.018 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f0a9822c-ac61-4a36-a5ac-079a768cd47c 00:06:28.276 request: 00:06:28.276 { 00:06:28.276 "uuid": "f0a9822c-ac61-4a36-a5ac-079a768cd47c", 00:06:28.276 "method": "bdev_lvol_get_lvstores", 00:06:28.276 "req_id": 1 00:06:28.276 } 00:06:28.276 Got JSON-RPC error response 00:06:28.276 response: 00:06:28.276 { 00:06:28.276 "code": -19, 00:06:28.276 "message": "No such device" 00:06:28.276 } 00:06:28.276 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:06:28.276 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:28.276 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:28.276 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:28.276 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:28.534 aio_bdev 00:06:28.534 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e1016373-d63a-4f6e-b56f-faeb2f9eeb8d 00:06:28.534 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=e1016373-d63a-4f6e-b56f-faeb2f9eeb8d 00:06:28.534 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:28.534 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:06:28.534 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:28.534 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:28.534 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:06:28.534 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e1016373-d63a-4f6e-b56f-faeb2f9eeb8d -t 2000 00:06:28.792 [ 00:06:28.792 { 00:06:28.792 "name": "e1016373-d63a-4f6e-b56f-faeb2f9eeb8d", 00:06:28.792 "aliases": [ 00:06:28.792 "lvs/lvol" 00:06:28.792 ], 00:06:28.792 "product_name": "Logical Volume", 00:06:28.792 "block_size": 4096, 00:06:28.792 "num_blocks": 38912, 00:06:28.792 "uuid": "e1016373-d63a-4f6e-b56f-faeb2f9eeb8d", 00:06:28.792 "assigned_rate_limits": { 00:06:28.792 "rw_ios_per_sec": 0, 00:06:28.792 "rw_mbytes_per_sec": 0, 00:06:28.792 "r_mbytes_per_sec": 0, 00:06:28.792 "w_mbytes_per_sec": 0 00:06:28.792 }, 00:06:28.792 "claimed": false, 00:06:28.792 "zoned": false, 00:06:28.792 "supported_io_types": { 00:06:28.792 "read": true, 00:06:28.792 "write": true, 00:06:28.792 "unmap": true, 00:06:28.792 "flush": false, 00:06:28.792 "reset": true, 00:06:28.792 "nvme_admin": false, 00:06:28.792 "nvme_io": false, 00:06:28.792 "nvme_io_md": false, 00:06:28.792 "write_zeroes": true, 00:06:28.792 "zcopy": false, 00:06:28.792 "get_zone_info": false, 00:06:28.792 "zone_management": false, 00:06:28.792 "zone_append": false, 00:06:28.792 "compare": false, 00:06:28.792 "compare_and_write": false, 00:06:28.792 "abort": false, 00:06:28.792 "seek_hole": true, 00:06:28.792 "seek_data": true, 00:06:28.792 "copy": false, 00:06:28.792 "nvme_iov_md": false 00:06:28.792 }, 00:06:28.792 "driver_specific": { 00:06:28.792 "lvol": { 00:06:28.792 "lvol_store_uuid": "f0a9822c-ac61-4a36-a5ac-079a768cd47c", 00:06:28.792 "base_bdev": "aio_bdev", 00:06:28.792 "thin_provision": false, 00:06:28.792 "num_allocated_clusters": 38, 00:06:28.792 "snapshot": false, 00:06:28.792 "clone": false, 00:06:28.792 "esnap_clone": false 00:06:28.792 } 00:06:28.792 } 00:06:28.792 } 00:06:28.792 ] 00:06:28.792 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:06:28.792 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f0a9822c-ac61-4a36-a5ac-079a768cd47c 00:06:28.792 
05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:06:29.050 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:06:29.050 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f0a9822c-ac61-4a36-a5ac-079a768cd47c 00:06:29.050 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:06:29.307 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:06:29.307 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e1016373-d63a-4f6e-b56f-faeb2f9eeb8d 00:06:29.565 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f0a9822c-ac61-4a36-a5ac-079a768cd47c 00:06:29.565 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:29.823 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:29.823 00:06:29.823 real 0m15.654s 00:06:29.823 user 0m15.197s 00:06:29.823 sys 0m1.487s 00:06:29.823 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.823 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:29.823 ************************************ 00:06:29.823 END TEST lvs_grow_clean 00:06:29.823 ************************************ 00:06:29.823 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:06:29.823 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:29.823 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.823 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:29.823 ************************************ 00:06:29.823 START TEST lvs_grow_dirty 00:06:29.823 ************************************ 00:06:29.823 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:06:29.823 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:29.823 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:29.823 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:29.823 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:29.823 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:29.823 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:29.823 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:30.081 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:30.081 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:30.081 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:30.081 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:30.398 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=efda482c-9c9d-45b0-8149-551493a27741 00:06:30.398 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u efda482c-9c9d-45b0-8149-551493a27741 00:06:30.398 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:30.655 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:30.655 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:30.655 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u efda482c-9c9d-45b0-8149-551493a27741 lvol 150 00:06:30.655 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a9ba0d4c-3fd7-4417-baf1-fd89722f913f 00:06:30.655 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:30.655 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:30.913 [2024-12-09 05:01:07.446721] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:30.913 [2024-12-09 05:01:07.446772] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:30.913 true 00:06:30.913 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u efda482c-9c9d-45b0-8149-551493a27741 00:06:30.913 05:01:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:31.171 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:31.171 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:31.428 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a9ba0d4c-3fd7-4417-baf1-fd89722f913f 00:06:31.428 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:31.685 [2024-12-09 05:01:08.184926] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:31.685 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:31.943 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3435922 00:06:31.944 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:31.944 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:31.944 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3435922 /var/tmp/bdevperf.sock 00:06:31.944 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3435922 ']' 00:06:31.944 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:31.944 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.944 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:31.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:31.944 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.944 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:31.944 [2024-12-09 05:01:08.429428] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:06:31.944 [2024-12-09 05:01:08.429473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3435922 ] 00:06:31.944 [2024-12-09 05:01:08.493223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.944 [2024-12-09 05:01:08.533994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.201 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.201 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:06:32.201 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:32.459 Nvme0n1 00:06:32.459 05:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:32.717 [ 00:06:32.717 { 00:06:32.717 "name": "Nvme0n1", 00:06:32.717 "aliases": [ 00:06:32.717 "a9ba0d4c-3fd7-4417-baf1-fd89722f913f" 00:06:32.717 ], 00:06:32.717 "product_name": "NVMe disk", 00:06:32.717 "block_size": 4096, 00:06:32.717 "num_blocks": 38912, 00:06:32.717 "uuid": "a9ba0d4c-3fd7-4417-baf1-fd89722f913f", 00:06:32.717 "numa_id": 1, 00:06:32.717 "assigned_rate_limits": { 00:06:32.717 "rw_ios_per_sec": 0, 00:06:32.717 "rw_mbytes_per_sec": 0, 00:06:32.717 "r_mbytes_per_sec": 0, 00:06:32.717 "w_mbytes_per_sec": 0 00:06:32.717 }, 00:06:32.717 "claimed": false, 00:06:32.717 "zoned": false, 00:06:32.717 "supported_io_types": { 00:06:32.717 "read": true, 00:06:32.717 "write": true, 00:06:32.717 "unmap": true, 00:06:32.717 "flush": true, 00:06:32.717 "reset": true, 00:06:32.717 "nvme_admin": true, 00:06:32.717 "nvme_io": true, 00:06:32.717 "nvme_io_md": false, 00:06:32.717 "write_zeroes": true, 00:06:32.717 "zcopy": false, 00:06:32.717 "get_zone_info": false, 00:06:32.717 "zone_management": false, 00:06:32.717 "zone_append": false, 00:06:32.717 "compare": true, 00:06:32.717 "compare_and_write": true, 00:06:32.717 "abort": true, 00:06:32.717 "seek_hole": false, 00:06:32.717 "seek_data": false, 00:06:32.717 "copy": true, 00:06:32.717 "nvme_iov_md": false 00:06:32.717 }, 00:06:32.717 "memory_domains": [ 00:06:32.717 { 00:06:32.717 "dma_device_id": "system", 00:06:32.717 "dma_device_type": 1 00:06:32.717 } 00:06:32.717 ], 00:06:32.717 "driver_specific": { 00:06:32.717 "nvme": [ 00:06:32.717 { 00:06:32.717 "trid": { 00:06:32.717 "trtype": "TCP", 00:06:32.717 "adrfam": "IPv4", 00:06:32.717 "traddr": "10.0.0.2", 00:06:32.717 "trsvcid": "4420", 00:06:32.717 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:32.717 }, 00:06:32.717 "ctrlr_data": { 00:06:32.717 "cntlid": 1, 00:06:32.717 "vendor_id": "0x8086", 00:06:32.717 "model_number": "SPDK bdev Controller", 00:06:32.717 "serial_number": "SPDK0", 00:06:32.717 "firmware_revision": "25.01", 00:06:32.717 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:32.717 "oacs": { 00:06:32.717 "security": 0, 00:06:32.717 "format": 0, 00:06:32.717 "firmware": 0, 00:06:32.717 "ns_manage": 0 00:06:32.717 }, 00:06:32.717 "multi_ctrlr": true, 00:06:32.717 
"ana_reporting": false 00:06:32.717 }, 00:06:32.717 "vs": { 00:06:32.717 "nvme_version": "1.3" 00:06:32.717 }, 00:06:32.717 "ns_data": { 00:06:32.717 "id": 1, 00:06:32.717 "can_share": true 00:06:32.717 } 00:06:32.717 } 00:06:32.717 ], 00:06:32.717 "mp_policy": "active_passive" 00:06:32.717 } 00:06:32.717 } 00:06:32.717 ] 00:06:32.717 05:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3436149 00:06:32.717 05:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:32.717 05:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:32.717 Running I/O for 10 seconds... 00:06:34.090 Latency(us) 00:06:34.090 [2024-12-09T04:01:10.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:34.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:34.090 Nvme0n1 : 1.00 22238.00 86.87 0.00 0.00 0.00 0.00 0.00 00:06:34.090 [2024-12-09T04:01:10.736Z] =================================================================================================================== 00:06:34.090 [2024-12-09T04:01:10.736Z] Total : 22238.00 86.87 0.00 0.00 0.00 0.00 0.00 00:06:34.090 00:06:34.655 05:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u efda482c-9c9d-45b0-8149-551493a27741 00:06:34.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:34.912 Nvme0n1 : 2.00 22453.50 87.71 0.00 0.00 0.00 0.00 0.00 00:06:34.912 [2024-12-09T04:01:11.558Z] =================================================================================================================== 00:06:34.912 [2024-12-09T04:01:11.558Z] Total : 22453.50 87.71 0.00 0.00 0.00 0.00 0.00 00:06:34.912 00:06:34.912 true 00:06:34.912 05:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:34.912 05:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u efda482c-9c9d-45b0-8149-551493a27741 00:06:35.168 05:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:35.168 05:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:35.168 05:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3436149 00:06:35.731 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:35.731 Nvme0n1 : 3.00 22559.00 88.12 0.00 0.00 0.00 0.00 0.00 00:06:35.731 [2024-12-09T04:01:12.377Z] =================================================================================================================== 00:06:35.731 [2024-12-09T04:01:12.377Z] Total : 22559.00 88.12 0.00 0.00 0.00 0.00 0.00 00:06:35.731 00:06:37.102 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:37.102 Nvme0n1 : 4.00 22611.25 88.33 0.00 0.00 0.00 0.00 0.00 00:06:37.102 [2024-12-09T04:01:13.748Z] 
=================================================================================================================== 00:06:37.102 [2024-12-09T04:01:13.748Z] Total : 22611.25 88.33 0.00 0.00 0.00 0.00 0.00 00:06:37.102 00:06:38.033 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:38.033 Nvme0n1 : 5.00 22673.60 88.57 0.00 0.00 0.00 0.00 0.00 00:06:38.033 [2024-12-09T04:01:14.679Z] =================================================================================================================== 00:06:38.033 [2024-12-09T04:01:14.679Z] Total : 22673.60 88.57 0.00 0.00 0.00 0.00 0.00 00:06:38.033 00:06:38.969 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:38.969 Nvme0n1 : 6.00 22739.83 88.83 0.00 0.00 0.00 0.00 0.00 00:06:38.969 [2024-12-09T04:01:15.615Z] =================================================================================================================== 00:06:38.969 [2024-12-09T04:01:15.615Z] Total : 22739.83 88.83 0.00 0.00 0.00 0.00 0.00 00:06:38.969 00:06:39.902 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:39.902 Nvme0n1 : 7.00 22772.57 88.96 0.00 0.00 0.00 0.00 0.00 00:06:39.902 [2024-12-09T04:01:16.548Z] =================================================================================================================== 00:06:39.902 [2024-12-09T04:01:16.548Z] Total : 22772.57 88.96 0.00 0.00 0.00 0.00 0.00 00:06:39.902 00:06:40.834 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:40.834 Nvme0n1 : 8.00 22814.00 89.12 0.00 0.00 0.00 0.00 0.00 00:06:40.834 [2024-12-09T04:01:17.480Z] =================================================================================================================== 00:06:40.834 [2024-12-09T04:01:17.480Z] Total : 22814.00 89.12 0.00 0.00 0.00 0.00 0.00 00:06:40.834 00:06:41.767 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:41.767 Nvme0n1 : 9.00 22833.11 89.19 0.00 0.00 0.00 0.00 0.00 00:06:41.767 [2024-12-09T04:01:18.413Z] =================================================================================================================== 00:06:41.767 [2024-12-09T04:01:18.413Z] Total : 22833.11 89.19 0.00 0.00 0.00 0.00 0.00 00:06:41.767 00:06:42.712 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:42.712 Nvme0n1 : 10.00 22856.00 89.28 0.00 0.00 0.00 0.00 0.00 00:06:42.712 [2024-12-09T04:01:19.358Z] =================================================================================================================== 00:06:42.712 [2024-12-09T04:01:19.358Z] Total : 22856.00 89.28 0.00 0.00 0.00 0.00 0.00 00:06:42.712 00:06:42.712 00:06:42.712 Latency(us) 00:06:42.712 [2024-12-09T04:01:19.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:42.712 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:42.713 Nvme0n1 : 10.00 22859.36 89.29 0.00 0.00 5596.41 3248.31 11454.55 00:06:42.713 [2024-12-09T04:01:19.359Z] =================================================================================================================== 00:06:42.713 [2024-12-09T04:01:19.359Z] Total : 22859.36 89.29 0.00 0.00 5596.41 3248.31 11454.55 00:06:42.713 { 00:06:42.713 "results": [ 00:06:42.713 { 00:06:42.713 "job": "Nvme0n1", 00:06:42.713 "core_mask": "0x2", 00:06:42.713 "workload": "randwrite", 00:06:42.713 "status": "finished", 00:06:42.713 "queue_depth": 128, 00:06:42.713 "io_size": 4096, 00:06:42.713 
"runtime": 10.00413, 00:06:42.713 "iops": 22859.35908469802, 00:06:42.713 "mibps": 89.29437142460164, 00:06:42.713 "io_failed": 0, 00:06:42.713 "io_timeout": 0, 00:06:42.713 "avg_latency_us": 5596.406126440733, 00:06:42.713 "min_latency_us": 3248.3060869565215, 00:06:42.713 "max_latency_us": 11454.553043478261 00:06:42.713 } 00:06:42.713 ], 00:06:42.713 "core_count": 1 00:06:42.713 } 00:06:42.970 05:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3435922 00:06:42.970 05:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3435922 ']' 00:06:42.970 05:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3435922 00:06:42.970 05:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:06:42.970 05:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.970 05:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3435922 00:06:42.970 05:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:42.970 05:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:42.970 05:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3435922' 00:06:42.970 killing process with pid 3435922 00:06:42.970 05:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3435922 00:06:42.970 Received shutdown signal, test time was about 10.000000 seconds 00:06:42.970 00:06:42.970 Latency(us) 00:06:42.970 [2024-12-09T04:01:19.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:42.970 [2024-12-09T04:01:19.616Z] =================================================================================================================== 00:06:42.970 [2024-12-09T04:01:19.616Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:42.970 05:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3435922 00:06:42.970 05:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:43.227 05:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:43.484 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u efda482c-9c9d-45b0-8149-551493a27741 00:06:43.484 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:06:43.743 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:06:43.743 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:06:43.743 05:01:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3432817 00:06:43.743 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3432817 00:06:43.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3432817 Killed "${NVMF_APP[@]}" "$@" 00:06:43.743 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:06:43.743 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:06:43.743 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:43.743 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:43.743 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:43.743 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3438000 00:06:43.743 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3438000 00:06:43.743 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:43.743 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3438000 ']' 00:06:43.743 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.743 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.743 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.743 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.743 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:43.743 [2024-12-09 05:01:20.301869] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:06:43.743 [2024-12-09 05:01:20.301918] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:43.743 [2024-12-09 05:01:20.371475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.001 [2024-12-09 05:01:20.413738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:44.001 [2024-12-09 05:01:20.413769] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:44.001 [2024-12-09 05:01:20.413776] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:44.001 [2024-12-09 05:01:20.413782] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:06:44.001 [2024-12-09 05:01:20.413787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:44.001 [2024-12-09 05:01:20.414394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.001 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.001 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:06:44.001 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:44.001 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:44.001 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:44.001 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:44.001 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:44.259 [2024-12-09 05:01:20.718773] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:06:44.259 [2024-12-09 05:01:20.718863] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:06:44.259 [2024-12-09 05:01:20.718889] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:06:44.259 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:06:44.259 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a9ba0d4c-3fd7-4417-baf1-fd89722f913f 00:06:44.259 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a9ba0d4c-3fd7-4417-baf1-fd89722f913f 00:06:44.259 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:44.259 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:06:44.259 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:44.259 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:44.259 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:06:44.517 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a9ba0d4c-3fd7-4417-baf1-fd89722f913f -t 2000 00:06:44.517 [ 00:06:44.517 { 00:06:44.517 "name": "a9ba0d4c-3fd7-4417-baf1-fd89722f913f", 00:06:44.517 "aliases": [ 00:06:44.517 "lvs/lvol" 00:06:44.517 ], 00:06:44.517 "product_name": "Logical Volume", 00:06:44.517 "block_size": 4096, 00:06:44.517 "num_blocks": 38912, 00:06:44.517 "uuid": "a9ba0d4c-3fd7-4417-baf1-fd89722f913f", 00:06:44.517 "assigned_rate_limits": { 00:06:44.517 "rw_ios_per_sec": 0, 00:06:44.517 "rw_mbytes_per_sec": 0, 
00:06:44.517 "r_mbytes_per_sec": 0, 00:06:44.517 "w_mbytes_per_sec": 0 00:06:44.517 }, 00:06:44.517 "claimed": false, 00:06:44.517 "zoned": false, 00:06:44.517 "supported_io_types": { 00:06:44.517 "read": true, 00:06:44.517 "write": true, 00:06:44.517 "unmap": true, 00:06:44.517 "flush": false, 00:06:44.517 "reset": true, 00:06:44.517 "nvme_admin": false, 00:06:44.517 "nvme_io": false, 00:06:44.517 "nvme_io_md": false, 00:06:44.517 "write_zeroes": true, 00:06:44.517 "zcopy": false, 00:06:44.517 "get_zone_info": false, 00:06:44.517 "zone_management": false, 00:06:44.517 "zone_append": false, 00:06:44.517 "compare": false, 00:06:44.517 "compare_and_write": false, 00:06:44.517 "abort": false, 00:06:44.517 "seek_hole": true, 00:06:44.517 "seek_data": true, 00:06:44.517 "copy": false, 00:06:44.517 "nvme_iov_md": false 00:06:44.517 }, 00:06:44.517 "driver_specific": { 00:06:44.517 "lvol": { 00:06:44.517 "lvol_store_uuid": "efda482c-9c9d-45b0-8149-551493a27741", 00:06:44.517 "base_bdev": "aio_bdev", 00:06:44.517 "thin_provision": false, 00:06:44.517 "num_allocated_clusters": 38, 00:06:44.517 "snapshot": false, 00:06:44.517 "clone": false, 00:06:44.517 "esnap_clone": false 00:06:44.517 } 00:06:44.517 } 00:06:44.517 } 00:06:44.517 ] 00:06:44.517 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:06:44.517 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u efda482c-9c9d-45b0-8149-551493a27741 00:06:44.517 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:06:44.775 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:06:44.775 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u efda482c-9c9d-45b0-8149-551493a27741 00:06:44.775 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:06:45.032 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:06:45.032 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:45.290 [2024-12-09 05:01:21.679600] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:06:45.290 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u efda482c-9c9d-45b0-8149-551493a27741 00:06:45.290 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:06:45.290 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u efda482c-9c9d-45b0-8149-551493a27741 00:06:45.290 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:45.290 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.290 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:45.290 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.290 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:45.290 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.290 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:45.290 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:45.290 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u efda482c-9c9d-45b0-8149-551493a27741 00:06:45.290 request: 00:06:45.290 { 00:06:45.290 "uuid": "efda482c-9c9d-45b0-8149-551493a27741", 00:06:45.290 "method": "bdev_lvol_get_lvstores", 00:06:45.290 "req_id": 1 00:06:45.290 } 00:06:45.290 Got JSON-RPC error response 00:06:45.290 response: 00:06:45.290 { 00:06:45.290 "code": -19, 00:06:45.290 "message": "No such device" 00:06:45.290 } 00:06:45.290 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:06:45.290 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:45.290 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:45.290 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:45.290 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:45.547 aio_bdev 00:06:45.547 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a9ba0d4c-3fd7-4417-baf1-fd89722f913f 00:06:45.547 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a9ba0d4c-3fd7-4417-baf1-fd89722f913f 00:06:45.547 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:45.548 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:06:45.548 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:45.548 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:45.548 05:01:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:06:45.805 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a9ba0d4c-3fd7-4417-baf1-fd89722f913f -t 2000 00:06:46.063 [ 00:06:46.063 { 00:06:46.063 "name": "a9ba0d4c-3fd7-4417-baf1-fd89722f913f", 00:06:46.063 "aliases": [ 00:06:46.063 "lvs/lvol" 00:06:46.063 ], 00:06:46.063 "product_name": "Logical Volume", 00:06:46.063 "block_size": 4096, 00:06:46.063 "num_blocks": 38912, 00:06:46.063 "uuid": "a9ba0d4c-3fd7-4417-baf1-fd89722f913f", 00:06:46.063 "assigned_rate_limits": { 00:06:46.063 "rw_ios_per_sec": 0, 00:06:46.063 "rw_mbytes_per_sec": 0, 00:06:46.063 "r_mbytes_per_sec": 0, 00:06:46.063 "w_mbytes_per_sec": 0 00:06:46.063 }, 00:06:46.063 "claimed": false, 00:06:46.063 "zoned": false, 00:06:46.063 "supported_io_types": { 00:06:46.063 "read": true, 00:06:46.063 "write": true, 00:06:46.063 "unmap": true, 00:06:46.063 "flush": false, 00:06:46.063 "reset": true, 00:06:46.063 "nvme_admin": false, 00:06:46.063 "nvme_io": false, 00:06:46.063 "nvme_io_md": false, 00:06:46.063 "write_zeroes": true, 00:06:46.063 "zcopy": false, 00:06:46.063 "get_zone_info": false, 00:06:46.063 "zone_management": false, 00:06:46.063 "zone_append": false, 00:06:46.063 "compare": false, 00:06:46.063 "compare_and_write": false, 00:06:46.063 "abort": false, 00:06:46.063 "seek_hole": true, 00:06:46.063 "seek_data": true, 00:06:46.063 "copy": false, 00:06:46.063 "nvme_iov_md": false 00:06:46.063 }, 00:06:46.063 "driver_specific": { 00:06:46.063 "lvol": { 00:06:46.063 "lvol_store_uuid": "efda482c-9c9d-45b0-8149-551493a27741", 00:06:46.063 "base_bdev": "aio_bdev", 00:06:46.063 "thin_provision": false, 00:06:46.063 "num_allocated_clusters": 38, 00:06:46.063 "snapshot": false, 00:06:46.063 "clone": false, 00:06:46.063 "esnap_clone": false 00:06:46.063 } 00:06:46.063 } 00:06:46.063 } 00:06:46.063 ] 00:06:46.063 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:06:46.063 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u efda482c-9c9d-45b0-8149-551493a27741 00:06:46.063 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:06:46.063 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:06:46.063 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:06:46.063 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u efda482c-9c9d-45b0-8149-551493a27741 00:06:46.320 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:06:46.320 05:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a9ba0d4c-3fd7-4417-baf1-fd89722f913f 00:06:46.579 05:01:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u efda482c-9c9d-45b0-8149-551493a27741 00:06:46.837 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:46.837 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:46.837 00:06:46.837 real 0m16.975s 00:06:46.837 user 0m43.938s 00:06:46.837 sys 0m3.698s 00:06:46.837 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.837 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:46.837 ************************************ 00:06:46.837 END TEST lvs_grow_dirty 00:06:46.837 ************************************ 00:06:46.837 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:06:46.837 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:06:46.837 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:06:46.837 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:06:46.837 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:06:47.094 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:06:47.094 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:06:47.094 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:06:47.094 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:06:47.094 nvmf_trace.0 00:06:47.094 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:06:47.094 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:06:47.094 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:47.094 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:06:47.094 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:47.094 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:06:47.094 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:47.094 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:47.094 rmmod nvme_tcp 00:06:47.094 rmmod nvme_fabrics 00:06:47.094 rmmod nvme_keyring 00:06:47.094 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:47.094 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:06:47.094 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:06:47.094 
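This is the "dirty" half of the test seen above: after bdev_lvol_grow_lvstore the original target application is killed with kill -9, so the grown lvstore is never cleanly unloaded; a fresh nvmf_tgt is started inside the cvl_0_0_ns_spdk namespace, and re-creating the AIO bdev triggers blobstore recovery ("Performing recovery on blobstore"), after which the recovered lvstore must still report 99 total and 61 free clusters before everything is torn down. A hedged sketch of that verification and cleanup, assuming $rpc, $lvs and $lvol carry over from the setup sketch earlier:

    $rpc bdev_aio_create /tmp/aio_bdev aio_bdev 4096     # re-attach the dirty backing file; forces blobstore recovery
    $rpc bdev_wait_for_examine                           # wait until the recovered lvol is re-exposed
    free=$($rpc bdev_lvol_get_lvstores -u "$lvs"  | jq -r '.[0].free_clusters')
    total=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    (( free == 61 && total == 99 ))                      # the grow must survive the unclean shutdown
    $rpc bdev_lvol_delete "$lvol"
    $rpc bdev_lvol_delete_lvstore -u "$lvs"
    $rpc bdev_aio_delete aio_bdev
    rm -f /tmp/aio_bdev

The nvmf_trace.0 shared-memory archive, the rmmod of nvme-tcp/nvme-fabrics/nvme-keyring and the iptables restore in the surrounding output are the standard nvmftestfini cleanup that follows every target test.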
05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3438000 ']' 00:06:47.094 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3438000 00:06:47.094 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3438000 ']' 00:06:47.094 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3438000 00:06:47.094 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:06:47.094 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.094 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3438000 00:06:47.094 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.094 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.094 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3438000' 00:06:47.094 killing process with pid 3438000 00:06:47.094 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3438000 00:06:47.094 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3438000 00:06:47.352 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:47.352 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:47.352 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:47.352 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:06:47.352 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:06:47.352 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:47.352 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:06:47.352 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:47.352 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:47.352 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:47.352 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:47.352 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.883 05:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:49.883 00:06:49.883 real 0m41.503s 00:06:49.883 user 1m4.664s 00:06:49.883 sys 0m9.829s 00:06:49.883 05:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.883 05:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:49.883 ************************************ 00:06:49.883 END TEST nvmf_lvs_grow 00:06:49.883 ************************************ 00:06:49.883 05:01:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:06:49.883 05:01:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:49.883 05:01:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.883 05:01:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:49.883 ************************************ 00:06:49.883 START TEST nvmf_bdev_io_wait 00:06:49.883 ************************************ 00:06:49.883 05:01:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:06:49.883 * Looking for test storage... 00:06:49.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:49.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.883 --rc genhtml_branch_coverage=1 00:06:49.883 --rc genhtml_function_coverage=1 00:06:49.883 --rc genhtml_legend=1 00:06:49.883 --rc geninfo_all_blocks=1 00:06:49.883 --rc geninfo_unexecuted_blocks=1 00:06:49.883 00:06:49.883 ' 00:06:49.883 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:49.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.883 --rc genhtml_branch_coverage=1 00:06:49.883 --rc genhtml_function_coverage=1 00:06:49.883 --rc genhtml_legend=1 00:06:49.883 --rc geninfo_all_blocks=1 00:06:49.883 --rc geninfo_unexecuted_blocks=1 00:06:49.883 00:06:49.883 ' 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:49.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.884 --rc genhtml_branch_coverage=1 00:06:49.884 --rc genhtml_function_coverage=1 00:06:49.884 --rc genhtml_legend=1 00:06:49.884 --rc geninfo_all_blocks=1 00:06:49.884 --rc geninfo_unexecuted_blocks=1 00:06:49.884 00:06:49.884 ' 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:49.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.884 --rc genhtml_branch_coverage=1 00:06:49.884 --rc genhtml_function_coverage=1 00:06:49.884 --rc genhtml_legend=1 00:06:49.884 --rc geninfo_all_blocks=1 00:06:49.884 --rc geninfo_unexecuted_blocks=1 00:06:49.884 00:06:49.884 ' 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:49.884 05:01:26 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:49.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:06:49.884 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:55.148 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:55.148 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:06:55.148 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:55.148 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:55.148 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:55.148 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:55.148 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:55.148 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:06:55.148 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:55.148 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:06:55.148 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:06:55.148 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:06:55.148 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:06:55.148 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:06:55.148 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:06:55.148 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:55.148 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:55.148 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:55.148 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:55.149 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:55.149 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:55.149 05:01:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:55.149 Found net devices under 0000:86:00.0: cvl_0_0 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:55.149 Found net devices under 0000:86:00.1: cvl_0_1 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:55.149 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:55.407 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:55.407 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:55.407 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:55.407 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:55.408 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:55.408 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:55.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:55.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:06:55.408 00:06:55.408 --- 10.0.0.2 ping statistics --- 00:06:55.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.408 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:06:55.408 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:55.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:55.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:06:55.408 00:06:55.408 --- 10.0.0.1 ping statistics --- 00:06:55.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.408 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:06:55.408 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:55.408 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:06:55.408 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:55.408 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:55.408 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:55.408 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:55.408 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:55.408 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:55.408 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:55.408 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:06:55.408 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:55.408 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:55.408 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:55.408 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3442060 00:06:55.408 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3442060 00:06:55.408 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:06:55.408 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3442060 ']' 00:06:55.408 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.408 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.408 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.408 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.408 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:55.408 [2024-12-09 05:01:32.003722] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
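The nvmf_tcp_init steps traced above come down to a short piece of iproute2/iptables plumbing: the target-side port is moved into its own network namespace, both ends are addressed from 10.0.0.0/24, TCP port 4420 is opened, and the path is ping-checked before nvme-tcp is loaded. A minimal standalone sketch, assuming the same interface names (cvl_0_0 on the target side, cvl_0_1 on the initiator side) and the same addresses as this run:

    # clear any stale addressing, then give the target port its own namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator keeps 10.0.0.1 in the root namespace, target gets 10.0.0.2 inside it
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port from the initiator side and verify both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp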
00:06:55.408 [2024-12-09 05:01:32.003768] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.666 [2024-12-09 05:01:32.074428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:55.666 [2024-12-09 05:01:32.118446] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:55.666 [2024-12-09 05:01:32.118486] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:55.666 [2024-12-09 05:01:32.118493] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:55.666 [2024-12-09 05:01:32.118499] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:55.666 [2024-12-09 05:01:32.118504] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:55.666 [2024-12-09 05:01:32.120011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.666 [2024-12-09 05:01:32.120033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.666 [2024-12-09 05:01:32.120120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:55.666 [2024-12-09 05:01:32.120122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.666 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.666 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:06:55.666 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:55.666 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:55.666 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:55.666 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:55.666 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:06:55.666 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.666 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:55.667 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.667 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:06:55.667 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.667 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:55.667 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.667 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:55.667 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.667 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:06:55.667 [2024-12-09 05:01:32.276425] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.667 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.667 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:06:55.667 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.667 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:55.667 Malloc0 00:06:55.667 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.667 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:55.667 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.667 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:55.925 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.925 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:55.925 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.925 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:55.925 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.925 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:55.925 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.925 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:55.925 [2024-12-09 05:01:32.323770] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:55.925 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.925 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3442200 00:06:55.925 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:06:55.925 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3442203 00:06:55.925 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:06:55.925 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:06:55.925 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:06:55.925 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:55.925 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:06:55.925 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:55.925 { 00:06:55.925 "params": { 00:06:55.925 "name": "Nvme$subsystem", 00:06:55.925 "trtype": "$TEST_TRANSPORT", 00:06:55.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:55.925 "adrfam": "ipv4", 00:06:55.925 "trsvcid": "$NVMF_PORT", 00:06:55.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:55.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:55.925 "hdgst": ${hdgst:-false}, 00:06:55.925 "ddgst": ${ddgst:-false} 00:06:55.925 }, 00:06:55.925 "method": "bdev_nvme_attach_controller" 00:06:55.925 } 00:06:55.925 EOF 00:06:55.925 )") 00:06:55.925 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3442206 00:06:55.925 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:06:55.925 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:06:55.925 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:06:55.925 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3442210 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:55.926 { 00:06:55.926 "params": { 00:06:55.926 "name": "Nvme$subsystem", 00:06:55.926 "trtype": "$TEST_TRANSPORT", 00:06:55.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:55.926 "adrfam": "ipv4", 00:06:55.926 "trsvcid": "$NVMF_PORT", 00:06:55.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:55.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:55.926 "hdgst": ${hdgst:-false}, 00:06:55.926 "ddgst": ${ddgst:-false} 00:06:55.926 }, 00:06:55.926 "method": "bdev_nvme_attach_controller" 00:06:55.926 } 00:06:55.926 EOF 00:06:55.926 )") 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:55.926 { 00:06:55.926 "params": { 00:06:55.926 "name": "Nvme$subsystem", 00:06:55.926 "trtype": "$TEST_TRANSPORT", 00:06:55.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:55.926 "adrfam": "ipv4", 00:06:55.926 "trsvcid": "$NVMF_PORT", 00:06:55.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:55.926 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:06:55.926 "hdgst": ${hdgst:-false}, 00:06:55.926 "ddgst": ${ddgst:-false} 00:06:55.926 }, 00:06:55.926 "method": "bdev_nvme_attach_controller" 00:06:55.926 } 00:06:55.926 EOF 00:06:55.926 )") 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:55.926 { 00:06:55.926 "params": { 00:06:55.926 "name": "Nvme$subsystem", 00:06:55.926 "trtype": "$TEST_TRANSPORT", 00:06:55.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:55.926 "adrfam": "ipv4", 00:06:55.926 "trsvcid": "$NVMF_PORT", 00:06:55.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:55.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:55.926 "hdgst": ${hdgst:-false}, 00:06:55.926 "ddgst": ${ddgst:-false} 00:06:55.926 }, 00:06:55.926 "method": "bdev_nvme_attach_controller" 00:06:55.926 } 00:06:55.926 EOF 00:06:55.926 )") 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3442200 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:55.926 "params": { 00:06:55.926 "name": "Nvme1", 00:06:55.926 "trtype": "tcp", 00:06:55.926 "traddr": "10.0.0.2", 00:06:55.926 "adrfam": "ipv4", 00:06:55.926 "trsvcid": "4420", 00:06:55.926 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:06:55.926 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:06:55.926 "hdgst": false, 00:06:55.926 "ddgst": false 00:06:55.926 }, 00:06:55.926 "method": "bdev_nvme_attach_controller" 00:06:55.926 }' 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:55.926 "params": { 00:06:55.926 "name": "Nvme1", 00:06:55.926 "trtype": "tcp", 00:06:55.926 "traddr": "10.0.0.2", 00:06:55.926 "adrfam": "ipv4", 00:06:55.926 "trsvcid": "4420", 00:06:55.926 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:06:55.926 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:06:55.926 "hdgst": false, 00:06:55.926 "ddgst": false 00:06:55.926 }, 00:06:55.926 "method": "bdev_nvme_attach_controller" 00:06:55.926 }' 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:55.926 "params": { 00:06:55.926 "name": "Nvme1", 00:06:55.926 "trtype": "tcp", 00:06:55.926 "traddr": "10.0.0.2", 00:06:55.926 "adrfam": "ipv4", 00:06:55.926 "trsvcid": "4420", 00:06:55.926 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:06:55.926 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:06:55.926 "hdgst": false, 00:06:55.926 "ddgst": false 00:06:55.926 }, 00:06:55.926 "method": "bdev_nvme_attach_controller" 00:06:55.926 }' 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:06:55.926 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:55.926 "params": { 00:06:55.926 "name": "Nvme1", 00:06:55.926 "trtype": "tcp", 00:06:55.926 "traddr": "10.0.0.2", 00:06:55.926 "adrfam": "ipv4", 00:06:55.926 "trsvcid": "4420", 00:06:55.926 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:06:55.926 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:06:55.926 "hdgst": false, 00:06:55.926 "ddgst": false 00:06:55.926 }, 00:06:55.926 "method": "bdev_nvme_attach_controller" 00:06:55.926 }' 00:06:55.926 [2024-12-09 05:01:32.375189] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:06:55.926 [2024-12-09 05:01:32.375241] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:06:55.926 [2024-12-09 05:01:32.375644] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:06:55.927 [2024-12-09 05:01:32.375687] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:06:55.927 [2024-12-09 05:01:32.379744] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:06:55.927 [2024-12-09 05:01:32.379786] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:06:55.927 [2024-12-09 05:01:32.381493] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
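Each bdevperf instance above receives its bdev configuration on an anonymous file descriptor: gen_nvmf_target_json prints the bdev_nvme_attach_controller parameters shown in the printf output and bash process substitution exposes them to --json as /dev/fd/63. Written out with a plain file instead, the write-workload instance looks roughly like the sketch below; the subsystems/config wrapper is assumed to be the standard SPDK JSON-config layout that gen_nvmf_target_json assembles with jq, while the parameters are copied from the trace (the read, flush and unmap instances differ only in -m, -i and -w):

    cat > /tmp/nvme1.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # queue depth 128, 4 KiB I/O, 1 second of writes against the NVMe-oF namespace
    ./build/examples/bdevperf --json /tmp/nvme1.json -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256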
00:06:55.927 [2024-12-09 05:01:32.381534] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:06:55.927 [2024-12-09 05:01:32.569335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.185 [2024-12-09 05:01:32.612322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:56.185 [2024-12-09 05:01:32.662731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.185 [2024-12-09 05:01:32.717010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.185 [2024-12-09 05:01:32.722950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:06:56.185 [2024-12-09 05:01:32.759899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:06:56.185 [2024-12-09 05:01:32.771233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.185 [2024-12-09 05:01:32.814691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:06:56.443 Running I/O for 1 seconds... 00:06:56.443 Running I/O for 1 seconds... 00:06:56.443 Running I/O for 1 seconds... 00:06:56.443 Running I/O for 1 seconds... 00:06:57.377 235624.00 IOPS, 920.41 MiB/s 00:06:57.377 Latency(us) 00:06:57.377 [2024-12-09T04:01:34.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:57.377 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:06:57.377 Nvme1n1 : 1.00 235258.41 918.98 0.00 0.00 541.66 227.95 1538.67 00:06:57.377 [2024-12-09T04:01:34.023Z] =================================================================================================================== 00:06:57.377 [2024-12-09T04:01:34.023Z] Total : 235258.41 918.98 0.00 0.00 541.66 227.95 1538.67 00:06:57.636 7418.00 IOPS, 28.98 MiB/s 00:06:57.636 Latency(us) 00:06:57.636 [2024-12-09T04:01:34.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:57.636 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:06:57.636 Nvme1n1 : 1.02 7416.22 28.97 0.00 0.00 17065.57 5014.93 25302.59 00:06:57.636 [2024-12-09T04:01:34.282Z] =================================================================================================================== 00:06:57.636 [2024-12-09T04:01:34.282Z] Total : 7416.22 28.97 0.00 0.00 17065.57 5014.93 25302.59 00:06:57.636 10894.00 IOPS, 42.55 MiB/s 00:06:57.636 Latency(us) 00:06:57.636 [2024-12-09T04:01:34.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:57.636 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:06:57.636 Nvme1n1 : 1.01 10940.51 42.74 0.00 0.00 11653.83 6439.62 23137.06 00:06:57.636 [2024-12-09T04:01:34.282Z] =================================================================================================================== 00:06:57.636 [2024-12-09T04:01:34.282Z] Total : 10940.51 42.74 0.00 0.00 11653.83 6439.62 23137.06 00:06:57.636 7172.00 IOPS, 28.02 MiB/s 00:06:57.636 Latency(us) 00:06:57.636 [2024-12-09T04:01:34.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:57.636 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:06:57.636 Nvme1n1 : 1.00 7286.90 28.46 0.00 0.00 17528.78 2664.18 40347.38 00:06:57.636 [2024-12-09T04:01:34.282Z] 
=================================================================================================================== 00:06:57.636 [2024-12-09T04:01:34.282Z] Total : 7286.90 28.46 0.00 0.00 17528.78 2664.18 40347.38 00:06:57.637 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3442203 00:06:57.637 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3442206 00:06:57.637 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3442210 00:06:57.637 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:57.637 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.637 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:57.637 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.637 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:06:57.637 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:06:57.637 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:57.637 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:06:57.637 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:57.637 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:06:57.637 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:57.637 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:57.637 rmmod nvme_tcp 00:06:57.637 rmmod nvme_fabrics 00:06:57.896 rmmod nvme_keyring 00:06:57.896 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:57.896 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:06:57.896 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:06:57.896 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3442060 ']' 00:06:57.896 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3442060 00:06:57.896 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3442060 ']' 00:06:57.896 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3442060 00:06:57.896 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:06:57.896 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.896 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3442060 00:06:57.896 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.896 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.896 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 3442060' 00:06:57.896 killing process with pid 3442060 00:06:57.896 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3442060 00:06:57.896 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3442060 00:06:58.155 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:58.155 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:58.155 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:58.155 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:06:58.155 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:06:58.155 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:58.155 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:06:58.155 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:58.155 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:58.155 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.155 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:58.155 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.059 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:00.059 00:07:00.059 real 0m10.655s 00:07:00.059 user 0m16.851s 00:07:00.059 sys 0m5.899s 00:07:00.059 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.059 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:00.059 ************************************ 00:07:00.059 END TEST nvmf_bdev_io_wait 00:07:00.059 ************************************ 00:07:00.059 05:01:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:00.059 05:01:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:00.059 05:01:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.059 05:01:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:00.059 ************************************ 00:07:00.059 START TEST nvmf_queue_depth 00:07:00.059 ************************************ 00:07:00.059 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:00.319 * Looking for test storage... 
00:07:00.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:00.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.319 --rc genhtml_branch_coverage=1 00:07:00.319 --rc genhtml_function_coverage=1 00:07:00.319 --rc genhtml_legend=1 00:07:00.319 --rc geninfo_all_blocks=1 00:07:00.319 --rc geninfo_unexecuted_blocks=1 00:07:00.319 00:07:00.319 ' 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:00.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.319 --rc genhtml_branch_coverage=1 00:07:00.319 --rc genhtml_function_coverage=1 00:07:00.319 --rc genhtml_legend=1 00:07:00.319 --rc geninfo_all_blocks=1 00:07:00.319 --rc geninfo_unexecuted_blocks=1 00:07:00.319 00:07:00.319 ' 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:00.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.319 --rc genhtml_branch_coverage=1 00:07:00.319 --rc genhtml_function_coverage=1 00:07:00.319 --rc genhtml_legend=1 00:07:00.319 --rc geninfo_all_blocks=1 00:07:00.319 --rc geninfo_unexecuted_blocks=1 00:07:00.319 00:07:00.319 ' 00:07:00.319 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:00.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.320 --rc genhtml_branch_coverage=1 00:07:00.320 --rc genhtml_function_coverage=1 00:07:00.320 --rc genhtml_legend=1 00:07:00.320 --rc geninfo_all_blocks=1 00:07:00.320 --rc geninfo_unexecuted_blocks=1 00:07:00.320 00:07:00.320 ' 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:00.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:00.320 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:06.884 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:06.884 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:06.884 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:06.884 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:06.884 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:06.884 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:06.884 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:06.884 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:06.884 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:06.884 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:06.884 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:06.884 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:06.884 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:06.884 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:06.884 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:06.884 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:06.884 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:06.884 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:06.884 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:06.884 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:06.885 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:06.885 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:06.885 Found net devices under 0000:86:00.0: cvl_0_0 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:06.885 Found net devices under 0000:86:00.1: cvl_0_1 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
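The trace above is nvmf/common.sh enumerating usable NICs for the TCP test: it matches the two Intel E810 functions (vendor 0x8086, device 0x159b) at 0000:86:00.0 and 0000:86:00.1, then resolves each PCI function to its kernel net device (cvl_0_0, cvl_0_1) through sysfs. A minimal sketch of that sysfs lookup, outside the test scripts, with the PCI address copied from the log:

  # List the net devices exposed by one PCI function (address taken from the log)
  pci=0000:86:00.0
  for dev in "/sys/bus/pci/devices/$pci/net/"*; do
      echo "Found net device under $pci: ${dev##*/}"
  done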
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:06.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:06.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:07:06.885 00:07:06.885 --- 10.0.0.2 ping statistics --- 00:07:06.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.885 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:06.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:06.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:07:06.885 00:07:06.885 --- 10.0.0.1 ping statistics --- 00:07:06.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.885 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:06.885 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:06.886 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:06.886 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:06.886 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:06.886 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:06.886 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3446100 00:07:06.886 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:06.886 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3446100 00:07:06.886 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3446100 ']' 00:07:06.886 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.886 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.886 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.886 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.886 05:01:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:06.886 [2024-12-09 05:01:42.810386] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
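The nvmf_tcp_init portion of the block above builds the test topology: the target-side interface cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed 10.0.0.2/24, the initiator-side cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened with a comment-tagged iptables rule, and both directions are ping-verified. A condensed sketch of the same steps (interface names, addresses and the comment tag are copied from the log; this is not the script itself):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator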
00:07:06.886 [2024-12-09 05:01:42.810431] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.886 [2024-12-09 05:01:42.879088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.886 [2024-12-09 05:01:42.920977] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:06.886 [2024-12-09 05:01:42.921017] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:06.886 [2024-12-09 05:01:42.921025] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:06.886 [2024-12-09 05:01:42.921031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:06.886 [2024-12-09 05:01:42.921036] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:06.886 [2024-12-09 05:01:42.921627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:06.886 [2024-12-09 05:01:43.054704] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:06.886 Malloc0 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.886 05:01:43 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:06.886 [2024-12-09 05:01:43.105286] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3446123 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3446123 /var/tmp/bdevperf.sock 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3446123 ']' 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:06.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:06.886 [2024-12-09 05:01:43.155567] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
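By this point queue_depth.sh has the target fully configured: nvmf_tgt runs inside the namespace as pid 3446100 on core mask 0x2, and the rpc_cmd calls above create the TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and a listener on 10.0.0.2:4420. Issued directly with rpc.py, the same configuration would look like the sketch below (flags and values are copied from the trace; only the bare rpc.py form is an assumption, since the script goes through its rpc_cmd wrapper):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420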
00:07:06.886 [2024-12-09 05:01:43.155607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3446123 ] 00:07:06.886 [2024-12-09 05:01:43.219387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.886 [2024-12-09 05:01:43.260330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:06.886 NVMe0n1 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.886 05:01:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:06.886 Running I/O for 10 seconds... 00:07:08.966 11271.00 IOPS, 44.03 MiB/s [2024-12-09T04:01:46.545Z] 11757.00 IOPS, 45.93 MiB/s [2024-12-09T04:01:47.919Z] 11745.00 IOPS, 45.88 MiB/s [2024-12-09T04:01:48.851Z] 11797.50 IOPS, 46.08 MiB/s [2024-12-09T04:01:49.792Z] 11868.40 IOPS, 46.36 MiB/s [2024-12-09T04:01:50.727Z] 11929.00 IOPS, 46.60 MiB/s [2024-12-09T04:01:51.661Z] 11904.57 IOPS, 46.50 MiB/s [2024-12-09T04:01:52.597Z] 11934.62 IOPS, 46.62 MiB/s [2024-12-09T04:01:53.975Z] 11959.67 IOPS, 46.72 MiB/s [2024-12-09T04:01:53.975Z] 11985.60 IOPS, 46.82 MiB/s 00:07:17.329 Latency(us) 00:07:17.329 [2024-12-09T04:01:53.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:17.329 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:17.329 Verification LBA range: start 0x0 length 0x4000 00:07:17.329 NVMe0n1 : 10.05 12023.86 46.97 0.00 0.00 84871.45 7265.95 56303.97 00:07:17.329 [2024-12-09T04:01:53.975Z] =================================================================================================================== 00:07:17.329 [2024-12-09T04:01:53.975Z] Total : 12023.86 46.97 0.00 0.00 84871.45 7265.95 56303.97 00:07:17.329 { 00:07:17.329 "results": [ 00:07:17.329 { 00:07:17.329 "job": "NVMe0n1", 00:07:17.329 "core_mask": "0x1", 00:07:17.329 "workload": "verify", 00:07:17.329 "status": "finished", 00:07:17.329 "verify_range": { 00:07:17.329 "start": 0, 00:07:17.329 "length": 16384 00:07:17.329 }, 00:07:17.329 "queue_depth": 1024, 00:07:17.329 "io_size": 4096, 00:07:17.329 "runtime": 10.048102, 00:07:17.329 "iops": 12023.86281508687, 00:07:17.329 "mibps": 46.968214121433085, 00:07:17.329 "io_failed": 0, 00:07:17.329 "io_timeout": 0, 00:07:17.329 "avg_latency_us": 84871.45060619529, 00:07:17.329 "min_latency_us": 7265.947826086956, 00:07:17.329 "max_latency_us": 56303.97217391304 00:07:17.329 } 00:07:17.329 ], 00:07:17.329 "core_count": 1 00:07:17.329 } 00:07:17.329 05:01:53 
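The bdevperf run above (-q 1024 -o 4096 -w verify -t 10 against the NVMe-oF controller attached at 10.0.0.2:4420) sustains roughly 12 kIOPS. The summary numbers are internally consistent: 12023.86 IOPS x 4096 B is about 46.97 MiB/s, matching the reported throughput, and a queue depth of 1024 divided by 12023.86 IOPS gives about 85 ms, in line with the reported 84.87 ms average latency (Little's law).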
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3446123 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3446123 ']' 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3446123 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3446123 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3446123' 00:07:17.329 killing process with pid 3446123 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3446123 00:07:17.329 Received shutdown signal, test time was about 10.000000 seconds 00:07:17.329 00:07:17.329 Latency(us) 00:07:17.329 [2024-12-09T04:01:53.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:17.329 [2024-12-09T04:01:53.975Z] =================================================================================================================== 00:07:17.329 [2024-12-09T04:01:53.975Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3446123 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:17.329 rmmod nvme_tcp 00:07:17.329 rmmod nvme_fabrics 00:07:17.329 rmmod nvme_keyring 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3446100 ']' 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3446100 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3446100 ']' 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 3446100 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.329 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3446100 00:07:17.596 05:01:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:17.596 05:01:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:17.596 05:01:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3446100' 00:07:17.596 killing process with pid 3446100 00:07:17.596 05:01:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3446100 00:07:17.596 05:01:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3446100 00:07:17.596 05:01:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:17.596 05:01:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:17.596 05:01:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:17.596 05:01:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:17.596 05:01:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:17.596 05:01:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:17.596 05:01:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:17.596 05:01:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:17.596 05:01:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:17.596 05:01:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.596 05:01:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:17.596 05:01:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.128 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:20.128 00:07:20.128 real 0m19.590s 00:07:20.128 user 0m23.001s 00:07:20.128 sys 0m5.958s 00:07:20.128 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.128 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:20.129 ************************************ 00:07:20.129 END TEST nvmf_queue_depth 00:07:20.129 ************************************ 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core -- 
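Teardown in this block mirrors the setup: bdevperf (pid 3446123) and nvmf_tgt (pid 3446100) are killed, the nvme-tcp/nvme-fabrics/nvme-keyring modules are unloaded, the comment-tagged firewall rules are stripped, the target namespace is removed, and the initiator address is flushed. The firewall cleanup relies on the SPDK_NVMF comment added at setup time; a condensed sketch of that idiom (the explicit ip netns delete is an assumed equivalent of the script's _remove_spdk_ns helper):

  # Drop only the rules this test tagged, leaving the rest of the ruleset untouched
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk      # assumed equivalent of _remove_spdk_ns for this run
  ip -4 addr flush cvl_0_1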
common/autotest_common.sh@10 -- # set +x 00:07:20.129 ************************************ 00:07:20.129 START TEST nvmf_target_multipath 00:07:20.129 ************************************ 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:20.129 * Looking for test storage... 00:07:20.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:20.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.129 --rc genhtml_branch_coverage=1 00:07:20.129 --rc genhtml_function_coverage=1 00:07:20.129 --rc genhtml_legend=1 00:07:20.129 --rc geninfo_all_blocks=1 00:07:20.129 --rc geninfo_unexecuted_blocks=1 00:07:20.129 00:07:20.129 ' 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:20.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.129 --rc genhtml_branch_coverage=1 00:07:20.129 --rc genhtml_function_coverage=1 00:07:20.129 --rc genhtml_legend=1 00:07:20.129 --rc geninfo_all_blocks=1 00:07:20.129 --rc geninfo_unexecuted_blocks=1 00:07:20.129 00:07:20.129 ' 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:20.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.129 --rc genhtml_branch_coverage=1 00:07:20.129 --rc genhtml_function_coverage=1 00:07:20.129 --rc genhtml_legend=1 00:07:20.129 --rc geninfo_all_blocks=1 00:07:20.129 --rc geninfo_unexecuted_blocks=1 00:07:20.129 00:07:20.129 ' 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:20.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.129 --rc genhtml_branch_coverage=1 00:07:20.129 --rc genhtml_function_coverage=1 00:07:20.129 --rc genhtml_legend=1 00:07:20.129 --rc geninfo_all_blocks=1 00:07:20.129 --rc geninfo_unexecuted_blocks=1 00:07:20.129 00:07:20.129 ' 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
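Before the multipath test proper, autotest_common.sh probes the installed lcov version: cmp_versions splits both version strings on '.', '-' and ':' into arrays (read -ra ver1/ver2), walks the fields left to right, and decides at the first field that differs, which is what the scripts/common.sh@353-368 trace above shows for 'lt 1.15 2'. An illustrative re-implementation of that field-by-field comparison (the function name and exact structure are mine, not SPDK's):

  # Succeeds when version $1 is strictly older than version $2 (illustrative sketch)
  version_lt() {
      local IFS=.-:
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }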
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.129 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:20.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:07:20.130 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:25.395 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:25.396 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:25.396 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:25.396 Found net devices under 0000:86:00.0: cvl_0_0 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.396 05:02:01 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:25.396 Found net devices under 0000:86:00.1: cvl_0_1 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:25.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:25.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:07:25.396 00:07:25.396 --- 10.0.0.2 ping statistics --- 00:07:25.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.396 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:25.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:25.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:07:25.396 00:07:25.396 --- 10.0.0.1 ping statistics --- 00:07:25.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.396 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:07:25.396 only one NIC for nvmf test 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:25.396 05:02:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:25.396 05:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
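After rebuilding the same single-NIC topology, multipath.sh stops early: the check at target/multipath.sh@45 finds no second usable interface/IP, prints 'only one NIC for nvmf test', runs nvmftestfini, and exits 0, so the test is skipped rather than failed on this rig. A sketch of that guard as it appears to behave here (the exact variable being tested is an assumption, based on the empty NVMF_SECOND_TARGET_IP set earlier in the trace):

  # Guard seen at multipath.sh@45-48; the variable name is assumed, not confirmed by the log
  if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
      echo 'only one NIC for nvmf test'
      nvmftestfini
      exit 0
  fi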
00:07:25.396 05:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:25.396 05:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:25.396 rmmod nvme_tcp 00:07:25.396 rmmod nvme_fabrics 00:07:25.396 rmmod nvme_keyring 00:07:25.656 05:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:25.656 05:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:25.656 05:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:25.656 05:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:07:25.656 05:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:25.656 05:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:25.656 05:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:25.656 05:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:07:25.656 05:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:07:25.656 05:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:25.656 05:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:07:25.656 05:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:25.656 05:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:25.656 05:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.656 05:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:25.656 05:02:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:27.562 00:07:27.562 real 0m7.804s 00:07:27.562 user 0m1.656s 00:07:27.562 sys 0m4.144s 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:27.562 ************************************ 00:07:27.562 END TEST nvmf_target_multipath 00:07:27.562 ************************************ 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.562 05:02:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:27.822 ************************************ 00:07:27.822 START TEST nvmf_zcopy 00:07:27.822 ************************************ 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:27.822 * Looking for test storage... 
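Worth noting before the zcopy output starts: the multipath test above did not fail, it bailed out at multipath.sh@45-48 with 'only one NIC for nvmf test', ran nvmftestfini and exited 0, because this rig exposes only one usable NIC pair and nvmf_tcp_init therefore left the second target IP empty (common.sh@262-263 above). A minimal sketch of that guard, assuming the empty string being tested is NVMF_SECOND_TARGET_IP (the variable name is not visible in the xtrace, only its empty value):

  if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
          echo 'only one NIC for nvmf test'
          nvmftestfini
          exit 0
  fi

The nvmftestfini teardown is fully visible above: sync, a retry loop (for i in {1..20}) around modprobe -v -r nvme-tcp and nvme-fabrics, iptables-save | grep -v SPDK_NVMF | iptables-restore to drop only the tagged rule, removal of the cvl_0_0_ns_spdk namespace, and a final ip -4 addr flush cvl_0_1.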
00:07:27.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:27.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.822 --rc genhtml_branch_coverage=1 00:07:27.822 --rc genhtml_function_coverage=1 00:07:27.822 --rc genhtml_legend=1 00:07:27.822 --rc geninfo_all_blocks=1 00:07:27.822 --rc geninfo_unexecuted_blocks=1 00:07:27.822 00:07:27.822 ' 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:27.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.822 --rc genhtml_branch_coverage=1 00:07:27.822 --rc genhtml_function_coverage=1 00:07:27.822 --rc genhtml_legend=1 00:07:27.822 --rc geninfo_all_blocks=1 00:07:27.822 --rc geninfo_unexecuted_blocks=1 00:07:27.822 00:07:27.822 ' 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:27.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.822 --rc genhtml_branch_coverage=1 00:07:27.822 --rc genhtml_function_coverage=1 00:07:27.822 --rc genhtml_legend=1 00:07:27.822 --rc geninfo_all_blocks=1 00:07:27.822 --rc geninfo_unexecuted_blocks=1 00:07:27.822 00:07:27.822 ' 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:27.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.822 --rc genhtml_branch_coverage=1 00:07:27.822 --rc genhtml_function_coverage=1 00:07:27.822 --rc genhtml_legend=1 00:07:27.822 --rc geninfo_all_blocks=1 00:07:27.822 --rc geninfo_unexecuted_blocks=1 00:07:27.822 00:07:27.822 ' 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.822 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:27.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:07:27.823 05:02:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:34.389 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:34.389 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:34.389 Found net devices under 0000:86:00.0: cvl_0_0 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:34.389 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.390 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:34.390 Found net devices under 0000:86:00.1: cvl_0_1 00:07:34.390 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.390 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:34.390 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:07:34.390 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:34.390 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:34.390 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:34.390 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:34.390 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:34.390 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:34.390 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:34.390 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:34.390 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:34.390 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:34.390 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:34.390 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:07:34.390 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:34.390 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:34.390 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:34.390 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:34.390 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:34.390 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:34.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:34.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.438 ms 00:07:34.390 00:07:34.390 --- 10.0.0.2 ping statistics --- 00:07:34.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.390 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:34.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:34.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:07:34.390 00:07:34.390 --- 10.0.0.1 ping statistics --- 00:07:34.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.390 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3455028 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3455028 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3455028 ']' 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:34.390 [2024-12-09 05:02:10.286431] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
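With the interfaces re-created for the zcopy test, nvmfappstart launches the target inside the namespace and waitforlisten blocks until the RPC socket answers, which is what produces the 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' line above. A rough equivalent of those two steps; the launch command is copied from the trace, while the polling loop is only an assumption about what waitforlisten does (SPDK's rpc_get_methods RPC is used here simply as a cheap liveness probe):

  ip netns exec cvl_0_0_ns_spdk \
          /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods >/dev/null 2>&1; do
          sleep 0.5
  done

The -m 0x2 core mask pins the target to core 1, which matches the 'Reactor started on core 1' notice below; the bdevperf initiators started later run with -c 0x1 on core 0.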
00:07:34.390 [2024-12-09 05:02:10.286480] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.390 [2024-12-09 05:02:10.356996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.390 [2024-12-09 05:02:10.397965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.390 [2024-12-09 05:02:10.397997] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.390 [2024-12-09 05:02:10.398011] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.390 [2024-12-09 05:02:10.398017] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.390 [2024-12-09 05:02:10.398022] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:34.390 [2024-12-09 05:02:10.398578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:34.390 [2024-12-09 05:02:10.536006] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:34.390 [2024-12-09 05:02:10.556210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:34.390 malloc0 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:07:34.390 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:07:34.391 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:07:34.391 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:34.391 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:34.391 { 00:07:34.391 "params": { 00:07:34.391 "name": "Nvme$subsystem", 00:07:34.391 "trtype": "$TEST_TRANSPORT", 00:07:34.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:34.391 "adrfam": "ipv4", 00:07:34.391 "trsvcid": "$NVMF_PORT", 00:07:34.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:34.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:34.391 "hdgst": ${hdgst:-false}, 00:07:34.391 "ddgst": ${ddgst:-false} 00:07:34.391 }, 00:07:34.391 "method": "bdev_nvme_attach_controller" 00:07:34.391 } 00:07:34.391 EOF 00:07:34.391 )") 00:07:34.391 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:07:34.391 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
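Target configuration then happens over RPC. Each rpc_cmd call above forwards its arguments to SPDK's scripts/rpc.py against the target's /var/tmp/spdk.sock, so the same setup could be reproduced from the repo root with the flags taken verbatim from the trace (the explanatory comments are mine, not part of the harness):

  # TCP transport; -o and -c 0 come straight from NVMF_TRANSPORT_OPTS in the trace,
  # and --zcopy asks the transport to use zero-copy paths where the bdev supports them
  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  # subsystem cnode1: allow any host (-a), serial SPDK00000000000001, at most 10 namespaces (-m 10)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  # data listener plus discovery listener, both on the namespaced target address
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # 32 MB malloc bdev with 4096-byte blocks, exported as NSID 1 of cnode1
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The gen_nvmf_target_json heredoc just emitted is resolved below into a bdev_nvme_attach_controller entry for Nvme1 at 10.0.0.2:4420, and bdevperf reads that JSON from /dev/fd/62 to run the first workload: 10 seconds of verify at queue depth 128 with 8192-byte I/O.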
00:07:34.391 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:07:34.391 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:34.391 "params": { 00:07:34.391 "name": "Nvme1", 00:07:34.391 "trtype": "tcp", 00:07:34.391 "traddr": "10.0.0.2", 00:07:34.391 "adrfam": "ipv4", 00:07:34.391 "trsvcid": "4420", 00:07:34.391 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:34.391 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:34.391 "hdgst": false, 00:07:34.391 "ddgst": false 00:07:34.391 }, 00:07:34.391 "method": "bdev_nvme_attach_controller" 00:07:34.391 }' 00:07:34.391 [2024-12-09 05:02:10.637041] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:07:34.391 [2024-12-09 05:02:10.637085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3455058 ] 00:07:34.391 [2024-12-09 05:02:10.701482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.391 [2024-12-09 05:02:10.744617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.391 Running I/O for 10 seconds... 00:07:36.703 8428.00 IOPS, 65.84 MiB/s [2024-12-09T04:02:14.285Z] 8488.50 IOPS, 66.32 MiB/s [2024-12-09T04:02:15.224Z] 8530.33 IOPS, 66.64 MiB/s [2024-12-09T04:02:16.159Z] 8535.00 IOPS, 66.68 MiB/s [2024-12-09T04:02:17.145Z] 8547.20 IOPS, 66.78 MiB/s [2024-12-09T04:02:18.155Z] 8558.33 IOPS, 66.86 MiB/s [2024-12-09T04:02:19.091Z] 8557.43 IOPS, 66.85 MiB/s [2024-12-09T04:02:20.467Z] 8563.75 IOPS, 66.90 MiB/s [2024-12-09T04:02:21.402Z] 8562.00 IOPS, 66.89 MiB/s [2024-12-09T04:02:21.402Z] 8560.40 IOPS, 66.88 MiB/s 00:07:44.756 Latency(us) 00:07:44.756 [2024-12-09T04:02:21.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:44.756 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:07:44.756 Verification LBA range: start 0x0 length 0x1000 00:07:44.756 Nvme1n1 : 10.01 8561.52 66.89 0.00 0.00 14907.10 1588.54 25758.50 00:07:44.756 [2024-12-09T04:02:21.402Z] =================================================================================================================== 00:07:44.756 [2024-12-09T04:02:21.402Z] Total : 8561.52 66.89 0.00 0.00 14907.10 1588.54 25758.50 00:07:44.756 05:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3456891 00:07:44.756 05:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:07:44.756 05:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:44.756 05:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:07:44.756 05:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:07:44.756 05:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:07:44.756 05:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:07:44.756 05:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:44.756 05:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:44.756 { 00:07:44.756 "params": { 00:07:44.756 "name": 
"Nvme$subsystem", 00:07:44.756 "trtype": "$TEST_TRANSPORT", 00:07:44.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:44.757 "adrfam": "ipv4", 00:07:44.757 "trsvcid": "$NVMF_PORT", 00:07:44.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:44.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:44.757 "hdgst": ${hdgst:-false}, 00:07:44.757 "ddgst": ${ddgst:-false} 00:07:44.757 }, 00:07:44.757 "method": "bdev_nvme_attach_controller" 00:07:44.757 } 00:07:44.757 EOF 00:07:44.757 )") 00:07:44.757 05:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:07:44.757 [2024-12-09 05:02:21.263220] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.757 [2024-12-09 05:02:21.263254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.757 05:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:07:44.757 05:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:07:44.757 05:02:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:44.757 "params": { 00:07:44.757 "name": "Nvme1", 00:07:44.757 "trtype": "tcp", 00:07:44.757 "traddr": "10.0.0.2", 00:07:44.757 "adrfam": "ipv4", 00:07:44.757 "trsvcid": "4420", 00:07:44.757 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:44.757 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:44.757 "hdgst": false, 00:07:44.757 "ddgst": false 00:07:44.757 }, 00:07:44.757 "method": "bdev_nvme_attach_controller" 00:07:44.757 }' 00:07:44.757 [2024-12-09 05:02:21.275215] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.757 [2024-12-09 05:02:21.275229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.757 [2024-12-09 05:02:21.287242] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.757 [2024-12-09 05:02:21.287252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.757 [2024-12-09 05:02:21.298645] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:07:44.757 [2024-12-09 05:02:21.298684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3456891 ] 00:07:44.757 [2024-12-09 05:02:21.299273] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.757 [2024-12-09 05:02:21.299283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.757 [2024-12-09 05:02:21.311304] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.757 [2024-12-09 05:02:21.311314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.757 [2024-12-09 05:02:21.323344] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.757 [2024-12-09 05:02:21.323358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.757 [2024-12-09 05:02:21.335368] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.757 [2024-12-09 05:02:21.335377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.757 [2024-12-09 05:02:21.347398] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.757 [2024-12-09 05:02:21.347408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.757 [2024-12-09 05:02:21.359429] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.757 [2024-12-09 05:02:21.359438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.757 [2024-12-09 05:02:21.362362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.757 [2024-12-09 05:02:21.371465] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.757 [2024-12-09 05:02:21.371477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.757 [2024-12-09 05:02:21.383495] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.757 [2024-12-09 05:02:21.383506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:44.757 [2024-12-09 05:02:21.395527] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:44.757 [2024-12-09 05:02:21.395537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.016 [2024-12-09 05:02:21.405325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.016 [2024-12-09 05:02:21.407561] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.016 [2024-12-09 05:02:21.407572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.016 [2024-12-09 05:02:21.419600] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.016 [2024-12-09 05:02:21.419622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.016 [2024-12-09 05:02:21.431630] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.016 [2024-12-09 05:02:21.431647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.016 [2024-12-09 05:02:21.443658] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:07:45.016 [2024-12-09 05:02:21.443670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.016 [2024-12-09 05:02:21.455688] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.016 [2024-12-09 05:02:21.455699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.016 [2024-12-09 05:02:21.467724] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.016 [2024-12-09 05:02:21.467736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.016 [2024-12-09 05:02:21.479754] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.016 [2024-12-09 05:02:21.479764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.016 [2024-12-09 05:02:21.491810] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.016 [2024-12-09 05:02:21.491831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.016 [2024-12-09 05:02:21.503828] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.016 [2024-12-09 05:02:21.503845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.016 [2024-12-09 05:02:21.515857] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.016 [2024-12-09 05:02:21.515872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.016 [2024-12-09 05:02:21.527891] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.016 [2024-12-09 05:02:21.527923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.016 [2024-12-09 05:02:21.539916] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.016 [2024-12-09 05:02:21.539925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.016 [2024-12-09 05:02:21.551948] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.016 [2024-12-09 05:02:21.551958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.016 [2024-12-09 05:02:21.563982] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.016 [2024-12-09 05:02:21.563993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.016 [2024-12-09 05:02:21.576022] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.016 [2024-12-09 05:02:21.576036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.016 [2024-12-09 05:02:21.588059] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.016 [2024-12-09 05:02:21.588069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.016 [2024-12-09 05:02:21.600089] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.016 [2024-12-09 05:02:21.600099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.017 [2024-12-09 05:02:21.612122] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.017 [2024-12-09 05:02:21.612131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.017 [2024-12-09 
05:02:21.624159] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.017 [2024-12-09 05:02:21.624172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.017 [2024-12-09 05:02:21.636188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.017 [2024-12-09 05:02:21.636198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.017 [2024-12-09 05:02:21.648222] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.017 [2024-12-09 05:02:21.648231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.017 [2024-12-09 05:02:21.660256] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.017 [2024-12-09 05:02:21.660267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.275 [2024-12-09 05:02:21.712283] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.275 [2024-12-09 05:02:21.712301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.275 [2024-12-09 05:02:21.720430] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.275 [2024-12-09 05:02:21.720442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.275 Running I/O for 5 seconds... 00:07:45.275 [2024-12-09 05:02:21.735565] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.275 [2024-12-09 05:02:21.735584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.275 [2024-12-09 05:02:21.750134] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.275 [2024-12-09 05:02:21.750154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.275 [2024-12-09 05:02:21.764312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.275 [2024-12-09 05:02:21.764330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.275 [2024-12-09 05:02:21.778222] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.275 [2024-12-09 05:02:21.778240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.275 [2024-12-09 05:02:21.792283] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.275 [2024-12-09 05:02:21.792302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.275 [2024-12-09 05:02:21.806420] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.275 [2024-12-09 05:02:21.806438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.275 [2024-12-09 05:02:21.820533] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.275 [2024-12-09 05:02:21.820551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.275 [2024-12-09 05:02:21.834595] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.275 [2024-12-09 05:02:21.834614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.275 [2024-12-09 05:02:21.848777] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
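The repeated subsystem.c / nvmf_rpc.c pairs above and below are one failed namespace add each: something in the zcopy script keeps re-issuing nvmf_subsystem_add_ns while NSID 1 (malloc0, added earlier) is still attached, so every attempt is rejected with 'Requested NSID 1 already in use' and reported as 'Unable to add namespace'. The loop itself is not visible in this excerpt, so its purpose is an inference; the individual failure is easy to reproduce against the setup above:

  # a second add of the same NSID is rejected, producing exactly the error pair seen in the log
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

Meanwhile the second bdevperf instance (started above with -t 5 -q 128 -w randrw -M 50 -o 8192) runs its 5-second random read/write workload against the same namespace, as the 'Running I/O for 5 seconds...' line above shows.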
00:07:45.275 [2024-12-09 05:02:21.848795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.275 [2024-12-09 05:02:21.862475] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.275 [2024-12-09 05:02:21.862494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.275 [2024-12-09 05:02:21.876625] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.275 [2024-12-09 05:02:21.876643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.275 [2024-12-09 05:02:21.890801] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.275 [2024-12-09 05:02:21.890821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.275 [2024-12-09 05:02:21.905233] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.275 [2024-12-09 05:02:21.905251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.275 [2024-12-09 05:02:21.919445] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.275 [2024-12-09 05:02:21.919470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.533 [2024-12-09 05:02:21.930886] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.533 [2024-12-09 05:02:21.930906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.533 [2024-12-09 05:02:21.945858] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.533 [2024-12-09 05:02:21.945876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.533 [2024-12-09 05:02:21.961244] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.533 [2024-12-09 05:02:21.961263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.533 [2024-12-09 05:02:21.975651] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.533 [2024-12-09 05:02:21.975669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.533 [2024-12-09 05:02:21.989841] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.533 [2024-12-09 05:02:21.989860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.533 [2024-12-09 05:02:22.003963] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.533 [2024-12-09 05:02:22.003982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.533 [2024-12-09 05:02:22.017725] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.533 [2024-12-09 05:02:22.017743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.533 [2024-12-09 05:02:22.031976] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.533 [2024-12-09 05:02:22.031995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.533 [2024-12-09 05:02:22.046003] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.533 [2024-12-09 05:02:22.046027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.533 [2024-12-09 05:02:22.060289] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.534 [2024-12-09 05:02:22.060307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.534 [2024-12-09 05:02:22.074192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.534 [2024-12-09 05:02:22.074210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.534 [2024-12-09 05:02:22.087776] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.534 [2024-12-09 05:02:22.087795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.534 [2024-12-09 05:02:22.101812] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.534 [2024-12-09 05:02:22.101832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.534 [2024-12-09 05:02:22.116037] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.534 [2024-12-09 05:02:22.116057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.534 [2024-12-09 05:02:22.130294] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.534 [2024-12-09 05:02:22.130314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.534 [2024-12-09 05:02:22.141872] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.534 [2024-12-09 05:02:22.141891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.534 [2024-12-09 05:02:22.156600] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.534 [2024-12-09 05:02:22.156619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.534 [2024-12-09 05:02:22.170855] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.534 [2024-12-09 05:02:22.170874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.791 [2024-12-09 05:02:22.181438] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.792 [2024-12-09 05:02:22.181456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.792 [2024-12-09 05:02:22.196424] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.792 [2024-12-09 05:02:22.196449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.792 [2024-12-09 05:02:22.211681] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.792 [2024-12-09 05:02:22.211700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.792 [2024-12-09 05:02:22.226050] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.792 [2024-12-09 05:02:22.226069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.792 [2024-12-09 05:02:22.236806] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.792 [2024-12-09 05:02:22.236825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.792 [2024-12-09 05:02:22.251029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.792 [2024-12-09 05:02:22.251048] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.792 [2024-12-09 05:02:22.264925] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.792 [2024-12-09 05:02:22.264944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.792 [2024-12-09 05:02:22.278921] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.792 [2024-12-09 05:02:22.278941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.792 [2024-12-09 05:02:22.293186] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.792 [2024-12-09 05:02:22.293205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.792 [2024-12-09 05:02:22.304844] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.792 [2024-12-09 05:02:22.304863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.792 [2024-12-09 05:02:22.319081] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.792 [2024-12-09 05:02:22.319101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.792 [2024-12-09 05:02:22.333235] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.792 [2024-12-09 05:02:22.333254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.792 [2024-12-09 05:02:22.347220] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.792 [2024-12-09 05:02:22.347244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.792 [2024-12-09 05:02:22.361323] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.792 [2024-12-09 05:02:22.361341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.792 [2024-12-09 05:02:22.375404] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.792 [2024-12-09 05:02:22.375422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.792 [2024-12-09 05:02:22.389519] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.792 [2024-12-09 05:02:22.389538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.792 [2024-12-09 05:02:22.403469] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.792 [2024-12-09 05:02:22.403488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.792 [2024-12-09 05:02:22.417704] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.792 [2024-12-09 05:02:22.417723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:45.792 [2024-12-09 05:02:22.431960] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:45.792 [2024-12-09 05:02:22.431983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.049 [2024-12-09 05:02:22.443198] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.049 [2024-12-09 05:02:22.443217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.049 [2024-12-09 05:02:22.457260] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.049 [2024-12-09 05:02:22.457284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.050 [2024-12-09 05:02:22.471093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.050 [2024-12-09 05:02:22.471113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.050 [2024-12-09 05:02:22.485105] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.050 [2024-12-09 05:02:22.485123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.050 [2024-12-09 05:02:22.499151] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.050 [2024-12-09 05:02:22.499169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.050 [2024-12-09 05:02:22.513114] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.050 [2024-12-09 05:02:22.513134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.050 [2024-12-09 05:02:22.527268] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.050 [2024-12-09 05:02:22.527287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.050 [2024-12-09 05:02:22.541470] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.050 [2024-12-09 05:02:22.541489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.050 [2024-12-09 05:02:22.555654] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.050 [2024-12-09 05:02:22.555672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.050 [2024-12-09 05:02:22.569573] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.050 [2024-12-09 05:02:22.569593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.050 [2024-12-09 05:02:22.583838] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.050 [2024-12-09 05:02:22.583857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.050 [2024-12-09 05:02:22.598192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.050 [2024-12-09 05:02:22.598211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.050 [2024-12-09 05:02:22.611981] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.050 [2024-12-09 05:02:22.612007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.050 [2024-12-09 05:02:22.625809] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.050 [2024-12-09 05:02:22.625827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.050 [2024-12-09 05:02:22.640039] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.050 [2024-12-09 05:02:22.640057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.050 [2024-12-09 05:02:22.653448] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.050 [2024-12-09 05:02:22.653467] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.050 [2024-12-09 05:02:22.667349] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.050 [2024-12-09 05:02:22.667367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.050 [2024-12-09 05:02:22.681824] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.050 [2024-12-09 05:02:22.681843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.050 [2024-12-09 05:02:22.692698] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.050 [2024-12-09 05:02:22.692716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.308 [2024-12-09 05:02:22.707061] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.308 [2024-12-09 05:02:22.707078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.308 [2024-12-09 05:02:22.720728] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.308 [2024-12-09 05:02:22.720754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.308 16462.00 IOPS, 128.61 MiB/s [2024-12-09T04:02:22.954Z] [2024-12-09 05:02:22.734842] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.308 [2024-12-09 05:02:22.734861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.308 [2024-12-09 05:02:22.748509] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.308 [2024-12-09 05:02:22.748527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.308 [2024-12-09 05:02:22.762751] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.308 [2024-12-09 05:02:22.762769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.308 [2024-12-09 05:02:22.776627] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.308 [2024-12-09 05:02:22.776645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.308 [2024-12-09 05:02:22.790519] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.308 [2024-12-09 05:02:22.790538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.308 [2024-12-09 05:02:22.804360] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.308 [2024-12-09 05:02:22.804378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.308 [2024-12-09 05:02:22.818362] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.308 [2024-12-09 05:02:22.818380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.308 [2024-12-09 05:02:22.832222] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.308 [2024-12-09 05:02:22.832241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.308 [2024-12-09 05:02:22.846795] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.308 [2024-12-09 05:02:22.846814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.308 [2024-12-09 
05:02:22.860642] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.308 [2024-12-09 05:02:22.860660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.308 [2024-12-09 05:02:22.874020] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.308 [2024-12-09 05:02:22.874038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.308 [2024-12-09 05:02:22.888167] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.308 [2024-12-09 05:02:22.888186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.308 [2024-12-09 05:02:22.901879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.308 [2024-12-09 05:02:22.901897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.308 [2024-12-09 05:02:22.916652] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.308 [2024-12-09 05:02:22.916669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.308 [2024-12-09 05:02:22.932560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.308 [2024-12-09 05:02:22.932579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.308 [2024-12-09 05:02:22.946581] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.308 [2024-12-09 05:02:22.946599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.566 [2024-12-09 05:02:22.960702] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.566 [2024-12-09 05:02:22.960721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.566 [2024-12-09 05:02:22.974971] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.566 [2024-12-09 05:02:22.974990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.566 [2024-12-09 05:02:22.988930] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.566 [2024-12-09 05:02:22.988953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.566 [2024-12-09 05:02:23.003177] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.566 [2024-12-09 05:02:23.003196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.566 [2024-12-09 05:02:23.014040] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.566 [2024-12-09 05:02:23.014059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.566 [2024-12-09 05:02:23.028598] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.566 [2024-12-09 05:02:23.028617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.566 [2024-12-09 05:02:23.042632] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.566 [2024-12-09 05:02:23.042651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.566 [2024-12-09 05:02:23.056728] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.566 [2024-12-09 05:02:23.056746] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.566 [2024-12-09 05:02:23.071029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.566 [2024-12-09 05:02:23.071047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.566 [2024-12-09 05:02:23.081755] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.566 [2024-12-09 05:02:23.081773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.566 [2024-12-09 05:02:23.096228] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.567 [2024-12-09 05:02:23.096246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.567 [2024-12-09 05:02:23.110085] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.567 [2024-12-09 05:02:23.110104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.567 [2024-12-09 05:02:23.124299] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.567 [2024-12-09 05:02:23.124317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.567 [2024-12-09 05:02:23.138267] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.567 [2024-12-09 05:02:23.138285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.567 [2024-12-09 05:02:23.152256] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.567 [2024-12-09 05:02:23.152274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.567 [2024-12-09 05:02:23.165992] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.567 [2024-12-09 05:02:23.166016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.567 [2024-12-09 05:02:23.180489] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.567 [2024-12-09 05:02:23.180508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.567 [2024-12-09 05:02:23.191250] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.567 [2024-12-09 05:02:23.191269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.567 [2024-12-09 05:02:23.205962] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.567 [2024-12-09 05:02:23.205981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.825 [2024-12-09 05:02:23.216297] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.825 [2024-12-09 05:02:23.216315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.825 [2024-12-09 05:02:23.231017] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.825 [2024-12-09 05:02:23.231036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.825 [2024-12-09 05:02:23.244935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.825 [2024-12-09 05:02:23.244955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.825 [2024-12-09 05:02:23.258737] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.825 [2024-12-09 05:02:23.258755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.825 [2024-12-09 05:02:23.272810] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.825 [2024-12-09 05:02:23.272830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.825 [2024-12-09 05:02:23.287267] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.825 [2024-12-09 05:02:23.287286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.825 [2024-12-09 05:02:23.301237] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.825 [2024-12-09 05:02:23.301256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.825 [2024-12-09 05:02:23.315296] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.825 [2024-12-09 05:02:23.315314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.826 [2024-12-09 05:02:23.329758] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.826 [2024-12-09 05:02:23.329776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.826 [2024-12-09 05:02:23.345083] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.826 [2024-12-09 05:02:23.345103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.826 [2024-12-09 05:02:23.359412] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.826 [2024-12-09 05:02:23.359431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.826 [2024-12-09 05:02:23.373064] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.826 [2024-12-09 05:02:23.373083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.826 [2024-12-09 05:02:23.387118] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.826 [2024-12-09 05:02:23.387137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.826 [2024-12-09 05:02:23.401381] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.826 [2024-12-09 05:02:23.401399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.826 [2024-12-09 05:02:23.415489] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.826 [2024-12-09 05:02:23.415507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.826 [2024-12-09 05:02:23.429282] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.826 [2024-12-09 05:02:23.429300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.826 [2024-12-09 05:02:23.443013] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.826 [2024-12-09 05:02:23.443031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:46.826 [2024-12-09 05:02:23.457622] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:46.826 [2024-12-09 05:02:23.457639] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.084 [2024-12-09 05:02:23.473222] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.084 [2024-12-09 05:02:23.473241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.084 [2024-12-09 05:02:23.487221] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.084 [2024-12-09 05:02:23.487241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.084 [2024-12-09 05:02:23.501256] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.084 [2024-12-09 05:02:23.501275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.084 [2024-12-09 05:02:23.515274] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.084 [2024-12-09 05:02:23.515294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.084 [2024-12-09 05:02:23.529545] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.084 [2024-12-09 05:02:23.529565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.084 [2024-12-09 05:02:23.543636] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.084 [2024-12-09 05:02:23.543656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.084 [2024-12-09 05:02:23.558012] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.084 [2024-12-09 05:02:23.558031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.084 [2024-12-09 05:02:23.568775] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.084 [2024-12-09 05:02:23.568793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.084 [2024-12-09 05:02:23.582993] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.084 [2024-12-09 05:02:23.583018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.084 [2024-12-09 05:02:23.596696] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.084 [2024-12-09 05:02:23.596715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.084 [2024-12-09 05:02:23.610962] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.084 [2024-12-09 05:02:23.610987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.084 [2024-12-09 05:02:23.624844] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.084 [2024-12-09 05:02:23.624865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.084 [2024-12-09 05:02:23.638749] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.084 [2024-12-09 05:02:23.638769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.084 [2024-12-09 05:02:23.653226] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.084 [2024-12-09 05:02:23.653244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.084 [2024-12-09 05:02:23.668649] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.084 [2024-12-09 05:02:23.668668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.084 [2024-12-09 05:02:23.683231] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.084 [2024-12-09 05:02:23.683249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.084 [2024-12-09 05:02:23.694054] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.084 [2024-12-09 05:02:23.694072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.084 [2024-12-09 05:02:23.709084] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.084 [2024-12-09 05:02:23.709104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.084 [2024-12-09 05:02:23.719925] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.084 [2024-12-09 05:02:23.719944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.341 16502.00 IOPS, 128.92 MiB/s [2024-12-09T04:02:23.987Z] [2024-12-09 05:02:23.734634] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.341 [2024-12-09 05:02:23.734653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.341 [2024-12-09 05:02:23.748247] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.341 [2024-12-09 05:02:23.748266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.341 [2024-12-09 05:02:23.762711] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.341 [2024-12-09 05:02:23.762735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.341 [2024-12-09 05:02:23.776499] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.341 [2024-12-09 05:02:23.776517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.341 [2024-12-09 05:02:23.790689] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.341 [2024-12-09 05:02:23.790708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.341 [2024-12-09 05:02:23.801797] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.341 [2024-12-09 05:02:23.801817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.341 [2024-12-09 05:02:23.816312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.341 [2024-12-09 05:02:23.816331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.341 [2024-12-09 05:02:23.830542] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.341 [2024-12-09 05:02:23.830561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.341 [2024-12-09 05:02:23.841965] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.341 [2024-12-09 05:02:23.841984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.341 [2024-12-09 05:02:23.856512] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:07:47.341 [2024-12-09 05:02:23.856532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.341 [2024-12-09 05:02:23.870350] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.341 [2024-12-09 05:02:23.870369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.341 [2024-12-09 05:02:23.885141] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.341 [2024-12-09 05:02:23.885159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.341 [2024-12-09 05:02:23.900345] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.341 [2024-12-09 05:02:23.900364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.341 [2024-12-09 05:02:23.914739] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.341 [2024-12-09 05:02:23.914757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.341 [2024-12-09 05:02:23.928923] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.341 [2024-12-09 05:02:23.928941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.341 [2024-12-09 05:02:23.942919] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.341 [2024-12-09 05:02:23.942937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.341 [2024-12-09 05:02:23.956932] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.341 [2024-12-09 05:02:23.956951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.341 [2024-12-09 05:02:23.971339] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.341 [2024-12-09 05:02:23.971357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.341 [2024-12-09 05:02:23.982828] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.341 [2024-12-09 05:02:23.982846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.599 [2024-12-09 05:02:23.997417] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.599 [2024-12-09 05:02:23.997436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.599 [2024-12-09 05:02:24.008784] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.599 [2024-12-09 05:02:24.008801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.599 [2024-12-09 05:02:24.023366] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.599 [2024-12-09 05:02:24.023388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.599 [2024-12-09 05:02:24.037057] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.599 [2024-12-09 05:02:24.037076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.599 [2024-12-09 05:02:24.050846] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.599 [2024-12-09 05:02:24.050864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.599 [2024-12-09 05:02:24.064941] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.599 [2024-12-09 05:02:24.064959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.599 [2024-12-09 05:02:24.078669] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.599 [2024-12-09 05:02:24.078688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.599 [2024-12-09 05:02:24.092696] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.599 [2024-12-09 05:02:24.092727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.599 [2024-12-09 05:02:24.106856] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.599 [2024-12-09 05:02:24.106874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.599 [2024-12-09 05:02:24.117858] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.599 [2024-12-09 05:02:24.117877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.599 [2024-12-09 05:02:24.132720] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.599 [2024-12-09 05:02:24.132738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.599 [2024-12-09 05:02:24.146661] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.599 [2024-12-09 05:02:24.146680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.599 [2024-12-09 05:02:24.160763] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.599 [2024-12-09 05:02:24.160782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.599 [2024-12-09 05:02:24.174977] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.599 [2024-12-09 05:02:24.174994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.599 [2024-12-09 05:02:24.185861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.599 [2024-12-09 05:02:24.185881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.599 [2024-12-09 05:02:24.200595] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.599 [2024-12-09 05:02:24.200614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.599 [2024-12-09 05:02:24.211142] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.599 [2024-12-09 05:02:24.211160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.599 [2024-12-09 05:02:24.225608] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.599 [2024-12-09 05:02:24.225626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.599 [2024-12-09 05:02:24.239619] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.599 [2024-12-09 05:02:24.239638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.857 [2024-12-09 05:02:24.250622] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.857 [2024-12-09 05:02:24.250640] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.857 [2024-12-09 05:02:24.265156] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.857 [2024-12-09 05:02:24.265175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.857 [2024-12-09 05:02:24.278956] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.857 [2024-12-09 05:02:24.278980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.857 [2024-12-09 05:02:24.293085] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.857 [2024-12-09 05:02:24.293105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.857 [2024-12-09 05:02:24.307298] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.857 [2024-12-09 05:02:24.307317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.857 [2024-12-09 05:02:24.318498] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.857 [2024-12-09 05:02:24.318516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.857 [2024-12-09 05:02:24.333384] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.857 [2024-12-09 05:02:24.333403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.857 [2024-12-09 05:02:24.344213] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.857 [2024-12-09 05:02:24.344231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.857 [2024-12-09 05:02:24.358756] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.857 [2024-12-09 05:02:24.358774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.857 [2024-12-09 05:02:24.372754] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.857 [2024-12-09 05:02:24.372772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.857 [2024-12-09 05:02:24.386841] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.857 [2024-12-09 05:02:24.386860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.857 [2024-12-09 05:02:24.400894] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.857 [2024-12-09 05:02:24.400913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.857 [2024-12-09 05:02:24.414926] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.857 [2024-12-09 05:02:24.414944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.857 [2024-12-09 05:02:24.429081] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.857 [2024-12-09 05:02:24.429100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.857 [2024-12-09 05:02:24.443155] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.857 [2024-12-09 05:02:24.443174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.857 [2024-12-09 05:02:24.457416] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.857 [2024-12-09 05:02:24.457434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.857 [2024-12-09 05:02:24.468446] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.857 [2024-12-09 05:02:24.468464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.857 [2024-12-09 05:02:24.483079] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.857 [2024-12-09 05:02:24.483097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.857 [2024-12-09 05:02:24.494032] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.857 [2024-12-09 05:02:24.494051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.116 [2024-12-09 05:02:24.508670] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.116 [2024-12-09 05:02:24.508689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.116 [2024-12-09 05:02:24.522149] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.116 [2024-12-09 05:02:24.522167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.116 [2024-12-09 05:02:24.536088] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.116 [2024-12-09 05:02:24.536112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.116 [2024-12-09 05:02:24.549984] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.116 [2024-12-09 05:02:24.550008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.116 [2024-12-09 05:02:24.561225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.116 [2024-12-09 05:02:24.561243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.116 [2024-12-09 05:02:24.575833] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.116 [2024-12-09 05:02:24.575852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.116 [2024-12-09 05:02:24.587194] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.116 [2024-12-09 05:02:24.587212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.116 [2024-12-09 05:02:24.601926] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.116 [2024-12-09 05:02:24.601944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.116 [2024-12-09 05:02:24.616354] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.116 [2024-12-09 05:02:24.616372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.116 [2024-12-09 05:02:24.627496] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.116 [2024-12-09 05:02:24.627514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.116 [2024-12-09 05:02:24.642296] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.116 [2024-12-09 05:02:24.642315] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.116 [2024-12-09 05:02:24.653310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.116 [2024-12-09 05:02:24.653330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.116 [2024-12-09 05:02:24.668167] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.116 [2024-12-09 05:02:24.668185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.116 [2024-12-09 05:02:24.681851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.116 [2024-12-09 05:02:24.681869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.116 [2024-12-09 05:02:24.696277] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.116 [2024-12-09 05:02:24.696295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.116 [2024-12-09 05:02:24.712367] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.116 [2024-12-09 05:02:24.712385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.116 [2024-12-09 05:02:24.726482] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.116 [2024-12-09 05:02:24.726500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.116 16522.33 IOPS, 129.08 MiB/s [2024-12-09T04:02:24.762Z] [2024-12-09 05:02:24.740508] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.116 [2024-12-09 05:02:24.740526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.116 [2024-12-09 05:02:24.754032] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.116 [2024-12-09 05:02:24.754050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.375 [2024-12-09 05:02:24.768593] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.375 [2024-12-09 05:02:24.768612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.375 [2024-12-09 05:02:24.782853] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.375 [2024-12-09 05:02:24.782871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.375 [2024-12-09 05:02:24.797194] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.375 [2024-12-09 05:02:24.797212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.375 [2024-12-09 05:02:24.811025] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.375 [2024-12-09 05:02:24.811043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.375 [2024-12-09 05:02:24.824952] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.375 [2024-12-09 05:02:24.824970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.375 [2024-12-09 05:02:24.838480] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.375 [2024-12-09 05:02:24.838498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.375 [2024-12-09 
05:02:24.852565] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.375 [2024-12-09 05:02:24.852583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.375 [2024-12-09 05:02:24.867347] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.375 [2024-12-09 05:02:24.867365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.375 [2024-12-09 05:02:24.882551] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.375 [2024-12-09 05:02:24.882570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.375 [2024-12-09 05:02:24.897147] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.375 [2024-12-09 05:02:24.897168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.375 [2024-12-09 05:02:24.910989] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.375 [2024-12-09 05:02:24.911014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.375 [2024-12-09 05:02:24.924881] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.375 [2024-12-09 05:02:24.924903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.375 [2024-12-09 05:02:24.939214] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.375 [2024-12-09 05:02:24.939234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.375 [2024-12-09 05:02:24.953287] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.375 [2024-12-09 05:02:24.953305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.375 [2024-12-09 05:02:24.966975] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.375 [2024-12-09 05:02:24.966993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.375 [2024-12-09 05:02:24.980818] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.375 [2024-12-09 05:02:24.980837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.375 [2024-12-09 05:02:24.995883] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.375 [2024-12-09 05:02:24.995906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.375 [2024-12-09 05:02:25.011026] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.375 [2024-12-09 05:02:25.011045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.632 [2024-12-09 05:02:25.025329] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.632 [2024-12-09 05:02:25.025349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.632 [2024-12-09 05:02:25.039281] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.632 [2024-12-09 05:02:25.039300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.632 [2024-12-09 05:02:25.053357] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.632 [2024-12-09 05:02:25.053377] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.632 [2024-12-09 05:02:25.067519] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.632 [2024-12-09 05:02:25.067537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.632 [2024-12-09 05:02:25.078577] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.632 [2024-12-09 05:02:25.078596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.632 [2024-12-09 05:02:25.093231] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.632 [2024-12-09 05:02:25.093252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.632 [2024-12-09 05:02:25.103956] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.632 [2024-12-09 05:02:25.103975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.632 [2024-12-09 05:02:25.118310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.632 [2024-12-09 05:02:25.118340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.632 [2024-12-09 05:02:25.132410] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.632 [2024-12-09 05:02:25.132429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.632 [2024-12-09 05:02:25.145991] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.632 [2024-12-09 05:02:25.146016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.632 [2024-12-09 05:02:25.160084] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.632 [2024-12-09 05:02:25.160103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.632 [2024-12-09 05:02:25.174222] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.632 [2024-12-09 05:02:25.174242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.632 [2024-12-09 05:02:25.188356] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.632 [2024-12-09 05:02:25.188375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.632 [2024-12-09 05:02:25.199372] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.632 [2024-12-09 05:02:25.199393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.632 [2024-12-09 05:02:25.214025] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.632 [2024-12-09 05:02:25.214045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.632 [2024-12-09 05:02:25.227576] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.632 [2024-12-09 05:02:25.227596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.632 [2024-12-09 05:02:25.241534] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.632 [2024-12-09 05:02:25.241554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.632 [2024-12-09 05:02:25.255850] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.632 [2024-12-09 05:02:25.255869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.632 [2024-12-09 05:02:25.269809] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.632 [2024-12-09 05:02:25.269827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.889 [2024-12-09 05:02:25.284138] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.889 [2024-12-09 05:02:25.284157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.889 [2024-12-09 05:02:25.294937] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.889 [2024-12-09 05:02:25.294956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.889 [2024-12-09 05:02:25.308818] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.889 [2024-12-09 05:02:25.308837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.889 [2024-12-09 05:02:25.322770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.889 [2024-12-09 05:02:25.322787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.889 [2024-12-09 05:02:25.336491] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.889 [2024-12-09 05:02:25.336509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.889 [2024-12-09 05:02:25.350480] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.889 [2024-12-09 05:02:25.350498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.889 [2024-12-09 05:02:25.364740] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.889 [2024-12-09 05:02:25.364758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.889 [2024-12-09 05:02:25.378534] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.889 [2024-12-09 05:02:25.378552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.889 [2024-12-09 05:02:25.392388] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.889 [2024-12-09 05:02:25.392406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.889 [2024-12-09 05:02:25.406566] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.889 [2024-12-09 05:02:25.406585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.889 [2024-12-09 05:02:25.420504] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.889 [2024-12-09 05:02:25.420523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.889 [2024-12-09 05:02:25.434688] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.889 [2024-12-09 05:02:25.434706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.889 [2024-12-09 05:02:25.448842] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.889 [2024-12-09 05:02:25.448860] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.889 [2024-12-09 05:02:25.463229] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.889 [2024-12-09 05:02:25.463248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.889 [2024-12-09 05:02:25.474046] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.889 [2024-12-09 05:02:25.474064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.889 [2024-12-09 05:02:25.488867] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.889 [2024-12-09 05:02:25.488885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.889 [2024-12-09 05:02:25.500195] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.889 [2024-12-09 05:02:25.500214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.889 [2024-12-09 05:02:25.515026] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.889 [2024-12-09 05:02:25.515044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.889 [2024-12-09 05:02:25.526172] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.889 [2024-12-09 05:02:25.526190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.147 [2024-12-09 05:02:25.540789] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.147 [2024-12-09 05:02:25.540808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.147 [2024-12-09 05:02:25.551581] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.147 [2024-12-09 05:02:25.551600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.147 [2024-12-09 05:02:25.565881] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.147 [2024-12-09 05:02:25.565904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.147 [2024-12-09 05:02:25.580175] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.147 [2024-12-09 05:02:25.580194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.147 [2024-12-09 05:02:25.594296] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.147 [2024-12-09 05:02:25.594315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.147 [2024-12-09 05:02:25.608354] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.147 [2024-12-09 05:02:25.608373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.147 [2024-12-09 05:02:25.621927] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.147 [2024-12-09 05:02:25.621945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.147 [2024-12-09 05:02:25.636442] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.147 [2024-12-09 05:02:25.636460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.147 [2024-12-09 05:02:25.647566] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.147 [2024-12-09 05:02:25.647585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.147 [2024-12-09 05:02:25.661839] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.147 [2024-12-09 05:02:25.661859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.147 [2024-12-09 05:02:25.675881] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.147 [2024-12-09 05:02:25.675900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.147 [2024-12-09 05:02:25.689927] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.147 [2024-12-09 05:02:25.689945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.147 [2024-12-09 05:02:25.703859] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.147 [2024-12-09 05:02:25.703878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.147 [2024-12-09 05:02:25.718184] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.147 [2024-12-09 05:02:25.718202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.147 [2024-12-09 05:02:25.732388] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.147 [2024-12-09 05:02:25.732406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.147 16527.25 IOPS, 129.12 MiB/s [2024-12-09T04:02:25.793Z] [2024-12-09 05:02:25.746792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.147 [2024-12-09 05:02:25.746811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.147 [2024-12-09 05:02:25.761005] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.147 [2024-12-09 05:02:25.761023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.147 [2024-12-09 05:02:25.776116] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.147 [2024-12-09 05:02:25.776134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.147 [2024-12-09 05:02:25.790203] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.147 [2024-12-09 05:02:25.790222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.405 [2024-12-09 05:02:25.804229] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.405 [2024-12-09 05:02:25.804247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.405 [2024-12-09 05:02:25.818423] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.405 [2024-12-09 05:02:25.818442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.405 [2024-12-09 05:02:25.832483] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.405 [2024-12-09 05:02:25.832506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.405 [2024-12-09 05:02:25.846351] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
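The repeated pairs of "Requested NSID 1 already in use" / "Unable to add namespace" appear to come from the zcopy test deliberately re-issuing nvmf_subsystem_add_ns for an NSID that is already attached to nqn.2016-06.io.spdk:cnode1 while the benchmark I/O is still in flight; each attempt goes through the subsystem pause/resume path (hence nvmf_rpc_ns_paused in the log) and is expected to fail. A minimal sketch of the same expected-failure loop driven by hand with rpc.py, reusing the workspace path and the malloc0 bdev name from this run (argument order follows current rpc.py usage and may need adjusting on other SPDK versions):

    # sketch only: re-add an NSID that is already in use; every call should fail
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 10); do
        # target logs "Requested NSID 1 already in use" and the RPC returns non-zero
        "$RPC" nvmf_subsystem_add_ns -n 1 "$NQN" malloc0 || true
    done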
00:07:49.405 [2024-12-09 05:02:25.846370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.405 [2024-12-09 05:02:25.860274] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.405 [2024-12-09 05:02:25.860292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.405 [2024-12-09 05:02:25.873862] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.405 [2024-12-09 05:02:25.873880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.405 [2024-12-09 05:02:25.887823] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.405 [2024-12-09 05:02:25.887841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.405 [2024-12-09 05:02:25.901961] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.405 [2024-12-09 05:02:25.901980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.405 [2024-12-09 05:02:25.916007] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.405 [2024-12-09 05:02:25.916026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.405 [2024-12-09 05:02:25.930326] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.405 [2024-12-09 05:02:25.930345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.405 [2024-12-09 05:02:25.944528] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.405 [2024-12-09 05:02:25.944546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.405 [2024-12-09 05:02:25.958635] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.405 [2024-12-09 05:02:25.958653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.405 [2024-12-09 05:02:25.972854] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.405 [2024-12-09 05:02:25.972873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.405 [2024-12-09 05:02:25.986624] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.405 [2024-12-09 05:02:25.986643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.405 [2024-12-09 05:02:26.000885] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.405 [2024-12-09 05:02:26.000904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.405 [2024-12-09 05:02:26.014951] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.405 [2024-12-09 05:02:26.014970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.405 [2024-12-09 05:02:26.028328] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.405 [2024-12-09 05:02:26.028346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.405 [2024-12-09 05:02:26.042393] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.405 [2024-12-09 05:02:26.042410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.662 [2024-12-09 05:02:26.057188] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.662 [2024-12-09 05:02:26.057212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.662 [2024-12-09 05:02:26.072761] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.662 [2024-12-09 05:02:26.072779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.662 [2024-12-09 05:02:26.086829] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.662 [2024-12-09 05:02:26.086847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.662 [2024-12-09 05:02:26.100930] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.662 [2024-12-09 05:02:26.100953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.662 [2024-12-09 05:02:26.114905] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.662 [2024-12-09 05:02:26.114923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.662 [2024-12-09 05:02:26.128872] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.662 [2024-12-09 05:02:26.128890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.662 [2024-12-09 05:02:26.142852] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.662 [2024-12-09 05:02:26.142870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.662 [2024-12-09 05:02:26.156972] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.662 [2024-12-09 05:02:26.156990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.662 [2024-12-09 05:02:26.170754] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.662 [2024-12-09 05:02:26.170772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.662 [2024-12-09 05:02:26.184921] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.662 [2024-12-09 05:02:26.184939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.662 [2024-12-09 05:02:26.198827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.662 [2024-12-09 05:02:26.198846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.662 [2024-12-09 05:02:26.213093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.662 [2024-12-09 05:02:26.213112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.662 [2024-12-09 05:02:26.227217] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.662 [2024-12-09 05:02:26.227235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.662 [2024-12-09 05:02:26.241494] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.662 [2024-12-09 05:02:26.241512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.662 [2024-12-09 05:02:26.252651] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.662 [2024-12-09 05:02:26.252669] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.662 [2024-12-09 05:02:26.267402] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.662 [2024-12-09 05:02:26.267422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.662 [2024-12-09 05:02:26.281602] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.662 [2024-12-09 05:02:26.281620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.662 [2024-12-09 05:02:26.297494] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.662 [2024-12-09 05:02:26.297514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.920 [2024-12-09 05:02:26.311902] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.920 [2024-12-09 05:02:26.311922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.920 [2024-12-09 05:02:26.326089] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.920 [2024-12-09 05:02:26.326110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.920 [2024-12-09 05:02:26.336842] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.920 [2024-12-09 05:02:26.336861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.920 [2024-12-09 05:02:26.351527] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.920 [2024-12-09 05:02:26.351546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.920 [2024-12-09 05:02:26.365266] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.920 [2024-12-09 05:02:26.365286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.920 [2024-12-09 05:02:26.379874] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.920 [2024-12-09 05:02:26.379891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.920 [2024-12-09 05:02:26.396049] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.920 [2024-12-09 05:02:26.396068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.920 [2024-12-09 05:02:26.410240] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.920 [2024-12-09 05:02:26.410259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.920 [2024-12-09 05:02:26.424474] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.920 [2024-12-09 05:02:26.424493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.920 [2024-12-09 05:02:26.438248] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.920 [2024-12-09 05:02:26.438267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.920 [2024-12-09 05:02:26.452442] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.920 [2024-12-09 05:02:26.452461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.920 [2024-12-09 05:02:26.466897] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.920 [2024-12-09 05:02:26.466916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.920 [2024-12-09 05:02:26.478111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.920 [2024-12-09 05:02:26.478130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.921 [2024-12-09 05:02:26.492654] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.921 [2024-12-09 05:02:26.492673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.921 [2024-12-09 05:02:26.506691] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.921 [2024-12-09 05:02:26.506710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.921 [2024-12-09 05:02:26.520714] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.921 [2024-12-09 05:02:26.520734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.921 [2024-12-09 05:02:26.534640] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.921 [2024-12-09 05:02:26.534660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.921 [2024-12-09 05:02:26.548347] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.921 [2024-12-09 05:02:26.548368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.921 [2024-12-09 05:02:26.562698] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.921 [2024-12-09 05:02:26.562717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.179 [2024-12-09 05:02:26.576442] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.179 [2024-12-09 05:02:26.576466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.179 [2024-12-09 05:02:26.590445] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.179 [2024-12-09 05:02:26.590464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.179 [2024-12-09 05:02:26.604727] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.179 [2024-12-09 05:02:26.604745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.179 [2024-12-09 05:02:26.615721] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.179 [2024-12-09 05:02:26.615740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.179 [2024-12-09 05:02:26.630544] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.179 [2024-12-09 05:02:26.630563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.179 [2024-12-09 05:02:26.641407] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.180 [2024-12-09 05:02:26.641426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.180 [2024-12-09 05:02:26.655606] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.180 [2024-12-09 05:02:26.655625] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.180 [2024-12-09 05:02:26.669590] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.180 [2024-12-09 05:02:26.669608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.180 [2024-12-09 05:02:26.683712] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.180 [2024-12-09 05:02:26.683733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.180 [2024-12-09 05:02:26.697771] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.180 [2024-12-09 05:02:26.697792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.180 [2024-12-09 05:02:26.711634] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.180 [2024-12-09 05:02:26.711653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.180 [2024-12-09 05:02:26.725979] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.180 [2024-12-09 05:02:26.726004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.180 [2024-12-09 05:02:26.739242] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.180 [2024-12-09 05:02:26.739261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.180 16533.40 IOPS, 129.17 MiB/s 00:07:50.180 Latency(us) 00:07:50.180 [2024-12-09T04:02:26.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.180 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:07:50.180 Nvme1n1 : 5.01 16535.16 129.18 0.00 0.00 7734.05 3675.71 14474.91 00:07:50.180 [2024-12-09T04:02:26.826Z] =================================================================================================================== 00:07:50.180 [2024-12-09T04:02:26.826Z] Total : 16535.16 129.18 0.00 0.00 7734.05 3675.71 14474.91 00:07:50.180 [2024-12-09 05:02:26.749686] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.180 [2024-12-09 05:02:26.749703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.180 [2024-12-09 05:02:26.761714] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.180 [2024-12-09 05:02:26.761729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.180 [2024-12-09 05:02:26.773757] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.180 [2024-12-09 05:02:26.773775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.180 [2024-12-09 05:02:26.785787] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.180 [2024-12-09 05:02:26.785805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.180 [2024-12-09 05:02:26.797814] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.180 [2024-12-09 05:02:26.797830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.180 [2024-12-09 05:02:26.809843] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.180 [2024-12-09 05:02:26.809856] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.180 [2024-12-09 05:02:26.821873] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.180 [2024-12-09 05:02:26.821893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.437 [2024-12-09 05:02:26.833903] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.437 [2024-12-09 05:02:26.833916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.437 [2024-12-09 05:02:26.845934] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.437 [2024-12-09 05:02:26.845947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.437 [2024-12-09 05:02:26.857963] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.437 [2024-12-09 05:02:26.857974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.437 [2024-12-09 05:02:26.869994] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.437 [2024-12-09 05:02:26.870006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.437 [2024-12-09 05:02:26.882034] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.437 [2024-12-09 05:02:26.882046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.437 [2024-12-09 05:02:26.894061] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.437 [2024-12-09 05:02:26.894072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.437 [2024-12-09 05:02:26.906093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.437 [2024-12-09 05:02:26.906102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.437 [2024-12-09 05:02:26.918124] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.437 [2024-12-09 05:02:26.918133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.437 [2024-12-09 05:02:26.930156] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.437 [2024-12-09 05:02:26.930165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.437 [2024-12-09 05:02:26.942192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:50.437 [2024-12-09 05:02:26.942202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:50.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3456891) - No such process 00:07:50.437 05:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3456891 00:07:50.437 05:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.437 05:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.437 05:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:50.437 05:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.437 05:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd 
bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:50.437 05:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.437 05:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:50.437 delay0 00:07:50.437 05:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.437 05:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:07:50.437 05:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.437 05:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:50.437 05:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.437 05:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:07:50.693 [2024-12-09 05:02:27.085676] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:57.257 Initializing NVMe Controllers 00:07:57.257 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:57.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:57.257 Initialization complete. Launching workers. 00:07:57.257 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 98 00:07:57.257 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 375, failed to submit 43 00:07:57.257 success 206, unsuccessful 169, failed 0 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:57.257 rmmod nvme_tcp 00:07:57.257 rmmod nvme_fabrics 00:07:57.257 rmmod nvme_keyring 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3455028 ']' 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3455028 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3455028 ']' 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # 
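Once the add-namespace loop and the randrw run finish (the summary above reports Nvme1n1 sustaining roughly 16.5k IOPS, 129 MiB/s, with about 7.7 ms average latency over the 5 s run), the test removes NSID 1, wraps malloc0 in a delay bdev so queued I/O lingers long enough to be aborted, re-attaches it as NSID 1, and runs the abort example over TCP; that is where the "abort submitted 375 ... success 206, unsuccessful 169" tallies come from. A condensed sketch of that sequence, reusing the RPC and NQN variables from the sketch above:

    "$RPC" nvmf_subsystem_remove_ns "$NQN" 1
    # wrap malloc0 in a delay bdev with very large read/write latencies so
    # outstanding I/O stays queued long enough for aborts to land
    "$RPC" bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$RPC" nvmf_subsystem_add_ns -n 1 "$NQN" delay0
    # drive it with the in-tree abort example over the TCP listener
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'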
kill -0 3455028 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3455028 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3455028' 00:07:57.257 killing process with pid 3455028 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3455028 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3455028 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.257 05:02:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.161 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:59.161 00:07:59.161 real 0m31.419s 00:07:59.161 user 0m42.399s 00:07:59.161 sys 0m10.875s 00:07:59.161 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.161 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:59.161 ************************************ 00:07:59.161 END TEST nvmf_zcopy 00:07:59.161 ************************************ 00:07:59.161 05:02:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:07:59.161 05:02:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:59.161 05:02:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.161 05:02:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:59.161 ************************************ 00:07:59.161 START TEST nvmf_nmic 00:07:59.161 ************************************ 00:07:59.161 05:02:35 
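The nvmf_zcopy teardown above (nvmftestfini) unloads the host-side NVMe/TCP modules, kills the target process, strips only the iptables rules the test added, and removes the target-side network namespace before the nmic test starts. Roughly, using the pid and interface names visible in this run; the namespace removal is shown as a plain ip netns delete, which is an approximation of the in-tree _remove_spdk_ns helper:

    modprobe -v -r nvme-tcp                               # also drops nvme_fabrics / nvme_keyring
    kill 3455028                                          # the nvmf target started for this test
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # keep everything except SPDK_NVMF-tagged rules
    ip netns delete cvl_0_0_ns_spdk                       # tear down the target-side namespace
    ip -4 addr flush cvl_0_1                              # clear the initiator-side interface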
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:07:59.161 * Looking for test storage... 00:07:59.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:59.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.420 --rc genhtml_branch_coverage=1 00:07:59.420 --rc genhtml_function_coverage=1 00:07:59.420 --rc genhtml_legend=1 00:07:59.420 --rc geninfo_all_blocks=1 00:07:59.420 --rc geninfo_unexecuted_blocks=1 00:07:59.420 00:07:59.420 ' 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:59.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.420 --rc genhtml_branch_coverage=1 00:07:59.420 --rc genhtml_function_coverage=1 00:07:59.420 --rc genhtml_legend=1 00:07:59.420 --rc geninfo_all_blocks=1 00:07:59.420 --rc geninfo_unexecuted_blocks=1 00:07:59.420 00:07:59.420 ' 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:59.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.420 --rc genhtml_branch_coverage=1 00:07:59.420 --rc genhtml_function_coverage=1 00:07:59.420 --rc genhtml_legend=1 00:07:59.420 --rc geninfo_all_blocks=1 00:07:59.420 --rc geninfo_unexecuted_blocks=1 00:07:59.420 00:07:59.420 ' 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:59.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.420 --rc genhtml_branch_coverage=1 00:07:59.420 --rc genhtml_function_coverage=1 00:07:59.420 --rc genhtml_legend=1 00:07:59.420 --rc geninfo_all_blocks=1 00:07:59.420 --rc geninfo_unexecuted_blocks=1 00:07:59.420 00:07:59.420 ' 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
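The nmic test header above sources scripts/common.sh, which picks lcov options by comparing version strings: cmp_versions splits each version on ".-:" and compares the numeric fields one by one (here 1.15 '<' 2), which is what the ver1/ver2 xtrace lines are showing. A simplified stand-alone rendering of that comparison, not the exact in-tree function:

    # ver_lt A B: succeed when version A sorts before version B
    ver_lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    ver_lt 1.15 2 && echo "lcov predates 2.x, keep the legacy --rc lcov_* options"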
00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.420 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:59.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:07:59.421 
05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:07:59.421 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:04.702 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:04.702 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:04.702 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:04.702 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:04.702 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:04.702 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:04.702 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:04.702 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:04.702 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:04.702 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:04.702 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:04.702 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:04.702 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:04.702 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:04.703 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:04.703 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:04.703 05:02:41 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:04.703 Found net devices under 0000:86:00.0: cvl_0_0 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:04.703 Found net devices under 0000:86:00.1: cvl_0_1 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:04.703 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:04.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:04.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:08:04.703 00:08:04.703 --- 10.0.0.2 ping statistics --- 00:08:04.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.704 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:08:04.704 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:04.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:04.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:08:04.704 00:08:04.704 --- 10.0.0.1 ping statistics --- 00:08:04.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.704 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:08:04.704 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.704 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:04.704 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:04.704 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.704 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:04.704 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:04.704 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.704 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:04.704 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:04.704 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:04.704 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:04.704 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:04.704 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:04.704 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3462266 00:08:04.704 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3462266 00:08:04.704 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:04.704 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3462266 ']' 00:08:04.704 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.704 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.704 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.704 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.704 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:04.962 [2024-12-09 05:02:41.382574] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
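For reference, the target bring-up traced above (nvmf_tcp_init followed by nvmfappstart) reduces to a short sequence: the target-side port is moved into its own network namespace, both sides get addresses on 10.0.0.0/24, an iptables rule admits NVMe/TCP traffic on port 4420, reachability is checked with ping, and nvmf_tgt is started inside the namespace. A condensed sketch of the same steps, run from the SPDK repo root; the interface names (cvl_0_0, cvl_0_1) and addresses are specific to this run and will differ on other hosts:

  ip netns add cvl_0_0_ns_spdk                                   # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port in the host firewall
  ping -c 1 10.0.0.2                                             # root namespace -> target address
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> initiator address
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -m 0xF      # run the target inside the namespace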
00:08:04.962 [2024-12-09 05:02:41.382626] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.962 [2024-12-09 05:02:41.453781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:04.963 [2024-12-09 05:02:41.497852] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.963 [2024-12-09 05:02:41.497893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.963 [2024-12-09 05:02:41.497901] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.963 [2024-12-09 05:02:41.497907] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:04.963 [2024-12-09 05:02:41.497911] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:04.963 [2024-12-09 05:02:41.499461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.963 [2024-12-09 05:02:41.499563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.963 [2024-12-09 05:02:41.499663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.963 [2024-12-09 05:02:41.499664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.963 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.963 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:04.963 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:04.963 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:04.963 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:05.222 [2024-12-09 05:02:41.646679] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:05.222 Malloc0 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:05.222 [2024-12-09 05:02:41.713166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:05.222 test case1: single bdev can't be used in multiple subsystems 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:05.222 [2024-12-09 05:02:41.741046] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:05.222 [2024-12-09 05:02:41.741066] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:05.222 [2024-12-09 05:02:41.741074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.222 request: 00:08:05.222 { 00:08:05.222 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:05.222 "namespace": { 00:08:05.222 "bdev_name": "Malloc0", 00:08:05.222 "no_auto_visible": false, 
00:08:05.222 "hide_metadata": false 00:08:05.222 }, 00:08:05.222 "method": "nvmf_subsystem_add_ns", 00:08:05.222 "req_id": 1 00:08:05.222 } 00:08:05.222 Got JSON-RPC error response 00:08:05.222 response: 00:08:05.222 { 00:08:05.222 "code": -32602, 00:08:05.222 "message": "Invalid parameters" 00:08:05.222 } 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:05.222 Adding namespace failed - expected result. 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:05.222 test case2: host connect to nvmf target in multiple paths 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:05.222 [2024-12-09 05:02:41.753189] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:05.222 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.223 05:02:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:06.597 05:02:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:07.532 05:02:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:07.532 05:02:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:07.532 05:02:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:07.532 05:02:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:07.532 05:02:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:08:09.434 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:09.434 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:09.434 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:09.434 05:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:09.434 05:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:09.434 05:02:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:09.434 05:02:46 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:09.434 [global] 00:08:09.434 thread=1 00:08:09.434 invalidate=1 00:08:09.434 rw=write 00:08:09.434 time_based=1 00:08:09.434 runtime=1 00:08:09.434 ioengine=libaio 00:08:09.434 direct=1 00:08:09.434 bs=4096 00:08:09.434 iodepth=1 00:08:09.434 norandommap=0 00:08:09.434 numjobs=1 00:08:09.434 00:08:09.434 verify_dump=1 00:08:09.434 verify_backlog=512 00:08:09.434 verify_state_save=0 00:08:09.434 do_verify=1 00:08:09.434 verify=crc32c-intel 00:08:09.434 [job0] 00:08:09.434 filename=/dev/nvme0n1 00:08:09.434 Could not set queue depth (nvme0n1) 00:08:09.693 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:09.693 fio-3.35 00:08:09.693 Starting 1 thread 00:08:11.090 00:08:11.090 job0: (groupid=0, jobs=1): err= 0: pid=3463342: Mon Dec 9 05:02:47 2024 00:08:11.090 read: IOPS=398, BW=1595KiB/s (1634kB/s)(1632KiB/1023msec) 00:08:11.090 slat (nsec): min=6341, max=28016, avg=8191.56, stdev=3575.97 00:08:11.090 clat (usec): min=216, max=42052, avg=2274.08, stdev=8921.98 00:08:11.090 lat (usec): min=223, max=42074, avg=2282.27, stdev=8925.01 00:08:11.090 clat percentiles (usec): 00:08:11.090 | 1.00th=[ 223], 5.00th=[ 227], 10.00th=[ 227], 20.00th=[ 231], 00:08:11.090 | 30.00th=[ 233], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 245], 00:08:11.090 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 297], 95.00th=[ 1369], 00:08:11.090 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:11.090 | 99.99th=[42206] 00:08:11.090 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:08:11.090 slat (nsec): min=8100, max=39044, avg=10192.15, stdev=1841.35 00:08:11.090 clat (usec): min=129, max=372, avg=164.56, stdev=23.21 00:08:11.090 lat (usec): min=138, max=411, avg=174.75, stdev=23.79 00:08:11.090 clat percentiles (usec): 00:08:11.090 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 135], 20.00th=[ 139], 00:08:11.090 | 30.00th=[ 143], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:08:11.090 | 70.00th=[ 180], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 192], 00:08:11.090 | 99.00th=[ 200], 99.50th=[ 212], 99.90th=[ 375], 99.95th=[ 375], 00:08:11.090 | 99.99th=[ 375] 00:08:11.090 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:08:11.090 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:11.090 lat (usec) : 250=84.02%, 500=13.70% 00:08:11.090 lat (msec) : 2=0.11%, 50=2.17% 00:08:11.090 cpu : usr=0.78%, sys=0.49%, ctx=920, majf=0, minf=1 00:08:11.090 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:11.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:11.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:11.090 issued rwts: total=408,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:11.090 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:11.090 00:08:11.090 Run status group 0 (all jobs): 00:08:11.090 READ: bw=1595KiB/s (1634kB/s), 1595KiB/s-1595KiB/s (1634kB/s-1634kB/s), io=1632KiB (1671kB), run=1023-1023msec 00:08:11.090 WRITE: bw=2002KiB/s (2050kB/s), 2002KiB/s-2002KiB/s (2050kB/s-2050kB/s), io=2048KiB (2097kB), run=1023-1023msec 00:08:11.090 00:08:11.090 Disk stats (read/write): 00:08:11.090 nvme0n1: ios=454/512, merge=0/0, ticks=1040/77, in_queue=1117, util=95.69% 00:08:11.090 05:02:47 
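The write-and-verify pass above comes from the fio-wrapper helper, which generates the libaio job printed in the trace against the connected namespace. A minimal standalone reproduction of that job, assuming /dev/nvme0n1 is whatever device the connect step produced on the host (the parameters mirror the job shown in the log: 4 KiB blocks, queue depth 1, one job, 1 s runtime, crc32c verification):

  cat > job0.fio <<'EOF'
  [global]
  ioengine=libaio
  direct=1
  thread=1
  invalidate=1
  rw=write
  bs=4096
  iodepth=1
  numjobs=1
  time_based=1
  runtime=1
  do_verify=1
  verify=crc32c-intel
  verify_dump=1
  verify_backlog=512
  verify_state_save=0

  [job0]
  filename=/dev/nvme0n1
  EOF
  fio job0.fio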
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:11.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:11.090 rmmod nvme_tcp 00:08:11.090 rmmod nvme_fabrics 00:08:11.090 rmmod nvme_keyring 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3462266 ']' 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3462266 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3462266 ']' 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3462266 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3462266 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3462266' 00:08:11.090 killing process with pid 3462266 00:08:11.090 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3462266 00:08:11.090 05:02:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3462266 00:08:11.350 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:11.350 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:11.350 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:11.350 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:11.350 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:11.350 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:11.350 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:11.350 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:11.350 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:11.350 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.350 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.350 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:13.884 00:08:13.884 real 0m14.308s 00:08:13.884 user 0m32.635s 00:08:13.884 sys 0m4.807s 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:13.884 ************************************ 00:08:13.884 END TEST nvmf_nmic 00:08:13.884 ************************************ 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:13.884 ************************************ 00:08:13.884 START TEST nvmf_fio_target 00:08:13.884 ************************************ 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:13.884 * Looking for test storage... 
00:08:13.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:13.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.884 --rc genhtml_branch_coverage=1 00:08:13.884 --rc genhtml_function_coverage=1 00:08:13.884 --rc genhtml_legend=1 00:08:13.884 --rc geninfo_all_blocks=1 00:08:13.884 --rc geninfo_unexecuted_blocks=1 00:08:13.884 00:08:13.884 ' 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:13.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.884 --rc genhtml_branch_coverage=1 00:08:13.884 --rc genhtml_function_coverage=1 00:08:13.884 --rc genhtml_legend=1 00:08:13.884 --rc geninfo_all_blocks=1 00:08:13.884 --rc geninfo_unexecuted_blocks=1 00:08:13.884 00:08:13.884 ' 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:13.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.884 --rc genhtml_branch_coverage=1 00:08:13.884 --rc genhtml_function_coverage=1 00:08:13.884 --rc genhtml_legend=1 00:08:13.884 --rc geninfo_all_blocks=1 00:08:13.884 --rc geninfo_unexecuted_blocks=1 00:08:13.884 00:08:13.884 ' 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:13.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.884 --rc genhtml_branch_coverage=1 00:08:13.884 --rc genhtml_function_coverage=1 00:08:13.884 --rc genhtml_legend=1 00:08:13.884 --rc geninfo_all_blocks=1 00:08:13.884 --rc geninfo_unexecuted_blocks=1 00:08:13.884 00:08:13.884 ' 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:13.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:13.884 05:02:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:13.884 05:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.161 05:02:55 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:19.161 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:19.161 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:19.162 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.162 05:02:55 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:19.162 Found net devices under 0000:86:00.0: cvl_0_0 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:19.162 Found net devices under 0000:86:00.1: cvl_0_1 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:19.162 05:02:55 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.162 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.422 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:19.422 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.422 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:19.422 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.422 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:19.422 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.422 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:19.422 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:19.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:08:19.422 00:08:19.422 --- 10.0.0.2 ping statistics --- 00:08:19.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.422 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:08:19.422 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:19.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:08:19.422 00:08:19.422 --- 10.0.0.1 ping statistics --- 00:08:19.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.422 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:08:19.422 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.422 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:08:19.422 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:19.422 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.422 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:19.422 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:19.422 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.422 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:19.422 05:02:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:19.422 05:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:19.422 05:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:19.422 05:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:19.422 05:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:19.422 05:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3467104 00:08:19.422 05:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:19.422 05:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3467104 00:08:19.422 05:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3467104 ']' 00:08:19.422 05:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.422 05:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.422 05:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.422 05:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:19.422 05:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:19.682 [2024-12-09 05:02:56.092970] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:08:19.682 [2024-12-09 05:02:56.093039] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.682 [2024-12-09 05:02:56.161369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:19.682 [2024-12-09 05:02:56.203635] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.682 [2024-12-09 05:02:56.203675] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.682 [2024-12-09 05:02:56.203682] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.682 [2024-12-09 05:02:56.203690] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.682 [2024-12-09 05:02:56.203695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.682 [2024-12-09 05:02:56.205118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.682 [2024-12-09 05:02:56.205137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.682 [2024-12-09 05:02:56.205228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.682 [2024-12-09 05:02:56.205226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.682 05:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:19.682 05:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:08:19.682 05:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:19.682 05:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:19.682 05:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:19.942 05:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.942 05:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:19.942 [2024-12-09 05:02:56.517172] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.942 05:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:20.201 05:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:20.201 05:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:20.460 05:02:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:20.460 05:02:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:20.720 05:02:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:20.720 05:02:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:20.979 05:02:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:20.979 05:02:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:21.237 05:02:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:21.237 05:02:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:21.237 05:02:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:21.557 05:02:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:21.557 05:02:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:21.844 05:02:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:21.844 05:02:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:08:21.844 05:02:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:22.128 05:02:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:22.128 05:02:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:22.386 05:02:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:22.386 05:02:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:22.644 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:22.644 [2024-12-09 05:02:59.238444] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.644 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:22.903 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:23.161 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:24.535 05:03:00 
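With the target listening on its default /var/tmp/spdk.sock RPC socket, the whole bdev and subsystem topology used by fio.sh is built over rpc.py, and the host then attaches with nvme-cli. The sequence above, condensed (rpc.py path shortened to scripts/rpc.py):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512        # repeated for Malloc0 .. Malloc6
scripts/rpc.py bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
scripts/rpc.py bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
# initiator side: connect; the four namespaces show up as /dev/nvme0n1 .. /dev/nvme0n4
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --hostid=80aaeb9f-0274-ea11-906e-0017a4403562

The waitforserial helper that follows below simply polls lsblk -l -o NAME,SERIAL for the SPDKISFASTANDAWESOME serial until all four block devices are visible before any fio job is started.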
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:24.535 05:03:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:08:24.535 05:03:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:24.535 05:03:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:08:24.535 05:03:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:08:24.535 05:03:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:08:26.436 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:26.436 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:26.436 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:26.436 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:08:26.436 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:26.436 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:08:26.436 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:26.436 [global] 00:08:26.436 thread=1 00:08:26.436 invalidate=1 00:08:26.436 rw=write 00:08:26.436 time_based=1 00:08:26.436 runtime=1 00:08:26.436 ioengine=libaio 00:08:26.436 direct=1 00:08:26.436 bs=4096 00:08:26.436 iodepth=1 00:08:26.436 norandommap=0 00:08:26.436 numjobs=1 00:08:26.436 00:08:26.436 verify_dump=1 00:08:26.436 verify_backlog=512 00:08:26.436 verify_state_save=0 00:08:26.436 do_verify=1 00:08:26.436 verify=crc32c-intel 00:08:26.436 [job0] 00:08:26.436 filename=/dev/nvme0n1 00:08:26.436 [job1] 00:08:26.436 filename=/dev/nvme0n2 00:08:26.436 [job2] 00:08:26.436 filename=/dev/nvme0n3 00:08:26.436 [job3] 00:08:26.436 filename=/dev/nvme0n4 00:08:26.436 Could not set queue depth (nvme0n1) 00:08:26.436 Could not set queue depth (nvme0n2) 00:08:26.436 Could not set queue depth (nvme0n3) 00:08:26.436 Could not set queue depth (nvme0n4) 00:08:26.703 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:26.703 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:26.703 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:26.703 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:26.703 fio-3.35 00:08:26.703 Starting 4 threads 00:08:28.079 00:08:28.079 job0: (groupid=0, jobs=1): err= 0: pid=3468583: Mon Dec 9 05:03:04 2024 00:08:28.079 read: IOPS=22, BW=90.8KiB/s (93.0kB/s)(92.0KiB/1013msec) 00:08:28.079 slat (nsec): min=9486, max=36082, avg=21064.26, stdev=5730.35 00:08:28.079 clat (usec): min=388, max=41974, avg=39377.64, stdev=8509.10 00:08:28.079 lat (usec): min=412, max=41993, avg=39398.71, stdev=8508.40 00:08:28.079 clat percentiles (usec): 00:08:28.079 | 1.00th=[ 388], 5.00th=[40633], 10.00th=[40633], 
20.00th=[40633], 00:08:28.079 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:28.079 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:08:28.079 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:28.079 | 99.99th=[42206] 00:08:28.079 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:08:28.079 slat (nsec): min=8117, max=66674, avg=11844.70, stdev=6026.12 00:08:28.079 clat (usec): min=139, max=399, avg=193.73, stdev=20.34 00:08:28.079 lat (usec): min=165, max=438, avg=205.57, stdev=22.08 00:08:28.079 clat percentiles (usec): 00:08:28.079 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 180], 00:08:28.079 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:08:28.079 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 215], 95.00th=[ 223], 00:08:28.079 | 99.00th=[ 277], 99.50th=[ 293], 99.90th=[ 400], 99.95th=[ 400], 00:08:28.079 | 99.99th=[ 400] 00:08:28.079 bw ( KiB/s): min= 4096, max= 4096, per=17.83%, avg=4096.00, stdev= 0.00, samples=1 00:08:28.079 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:28.079 lat (usec) : 250=94.21%, 500=1.68% 00:08:28.079 lat (msec) : 50=4.11% 00:08:28.079 cpu : usr=0.20%, sys=0.59%, ctx=535, majf=0, minf=1 00:08:28.079 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:28.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:28.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:28.079 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:28.079 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:28.079 job1: (groupid=0, jobs=1): err= 0: pid=3468584: Mon Dec 9 05:03:04 2024 00:08:28.079 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:08:28.079 slat (nsec): min=6195, max=25760, avg=7192.28, stdev=772.91 00:08:28.079 clat (usec): min=223, max=474, avg=276.46, stdev=47.53 00:08:28.079 lat (usec): min=230, max=481, avg=283.65, stdev=47.60 00:08:28.079 clat percentiles (usec): 00:08:28.079 | 1.00th=[ 239], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 253], 00:08:28.079 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 269], 00:08:28.079 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 297], 95.00th=[ 433], 00:08:28.079 | 99.00th=[ 453], 99.50th=[ 457], 99.90th=[ 461], 99.95th=[ 469], 00:08:28.079 | 99.99th=[ 474] 00:08:28.079 write: IOPS=2232, BW=8931KiB/s (9145kB/s)(8940KiB/1001msec); 0 zone resets 00:08:28.079 slat (nsec): min=9432, max=41531, avg=10742.55, stdev=1581.27 00:08:28.079 clat (usec): min=132, max=293, avg=172.59, stdev=18.77 00:08:28.079 lat (usec): min=143, max=334, avg=183.33, stdev=19.05 00:08:28.079 clat percentiles (usec): 00:08:28.079 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:08:28.079 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:08:28.079 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 196], 95.00th=[ 208], 00:08:28.079 | 99.00th=[ 231], 99.50th=[ 269], 99.90th=[ 285], 99.95th=[ 289], 00:08:28.079 | 99.99th=[ 293] 00:08:28.079 bw ( KiB/s): min= 8616, max= 8616, per=37.50%, avg=8616.00, stdev= 0.00, samples=1 00:08:28.079 iops : min= 2154, max= 2154, avg=2154.00, stdev= 0.00, samples=1 00:08:28.079 lat (usec) : 250=58.65%, 500=41.35% 00:08:28.079 cpu : usr=1.60%, sys=4.50%, ctx=4285, majf=0, minf=1 00:08:28.079 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:28.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:28.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:28.079 issued rwts: total=2048,2235,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:28.079 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:28.079 job2: (groupid=0, jobs=1): err= 0: pid=3468585: Mon Dec 9 05:03:04 2024 00:08:28.079 read: IOPS=2012, BW=8052KiB/s (8245kB/s)(8060KiB/1001msec) 00:08:28.079 slat (nsec): min=6703, max=29278, avg=8146.13, stdev=1182.19 00:08:28.079 clat (usec): min=228, max=377, avg=284.79, stdev=17.95 00:08:28.079 lat (usec): min=236, max=385, avg=292.94, stdev=18.02 00:08:28.079 clat percentiles (usec): 00:08:28.079 | 1.00th=[ 255], 5.00th=[ 262], 10.00th=[ 265], 20.00th=[ 273], 00:08:28.079 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 281], 60.00th=[ 285], 00:08:28.079 | 70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 318], 00:08:28.079 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 367], 99.95th=[ 367], 00:08:28.079 | 99.99th=[ 379] 00:08:28.079 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:08:28.079 slat (nsec): min=10105, max=43095, avg=11496.04, stdev=1804.17 00:08:28.079 clat (usec): min=143, max=918, avg=182.56, stdev=24.23 00:08:28.079 lat (usec): min=153, max=935, avg=194.05, stdev=24.53 00:08:28.079 clat percentiles (usec): 00:08:28.079 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:08:28.079 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 184], 00:08:28.079 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 206], 00:08:28.079 | 99.00th=[ 219], 99.50th=[ 229], 99.90th=[ 330], 99.95th=[ 693], 00:08:28.079 | 99.99th=[ 922] 00:08:28.079 bw ( KiB/s): min= 8192, max= 8192, per=35.65%, avg=8192.00, stdev= 0.00, samples=1 00:08:28.079 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:28.079 lat (usec) : 250=50.41%, 500=49.54%, 750=0.02%, 1000=0.02% 00:08:28.079 cpu : usr=4.50%, sys=5.10%, ctx=4063, majf=0, minf=1 00:08:28.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:28.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:28.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:28.080 issued rwts: total=2015,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:28.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:28.080 job3: (groupid=0, jobs=1): err= 0: pid=3468586: Mon Dec 9 05:03:04 2024 00:08:28.080 read: IOPS=723, BW=2893KiB/s (2963kB/s)(2896KiB/1001msec) 00:08:28.080 slat (nsec): min=6736, max=25417, avg=8792.16, stdev=2104.58 00:08:28.080 clat (usec): min=253, max=42035, avg=1075.32, stdev=5522.83 00:08:28.080 lat (usec): min=261, max=42047, avg=1084.12, stdev=5523.80 00:08:28.080 clat percentiles (usec): 00:08:28.080 | 1.00th=[ 262], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 289], 00:08:28.080 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[ 314], 00:08:28.080 | 70.00th=[ 318], 80.00th=[ 326], 90.00th=[ 334], 95.00th=[ 343], 00:08:28.080 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:28.080 | 99.99th=[42206] 00:08:28.080 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:08:28.080 slat (nsec): min=9639, max=41610, avg=12063.01, stdev=2107.48 00:08:28.080 clat (usec): min=152, max=1761, avg=193.79, stdev=53.59 00:08:28.080 lat (usec): min=164, max=1773, avg=205.85, stdev=53.79 00:08:28.080 clat percentiles (usec): 00:08:28.080 | 1.00th=[ 161], 
5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 178], 00:08:28.080 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 192], 00:08:28.080 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 217], 95.00th=[ 225], 00:08:28.080 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 318], 99.95th=[ 1762], 00:08:28.080 | 99.99th=[ 1762] 00:08:28.080 bw ( KiB/s): min= 8192, max= 8192, per=35.65%, avg=8192.00, stdev= 0.00, samples=1 00:08:28.080 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:28.080 lat (usec) : 250=57.09%, 500=42.05% 00:08:28.080 lat (msec) : 2=0.06%, 50=0.80% 00:08:28.080 cpu : usr=0.80%, sys=2.30%, ctx=1748, majf=0, minf=1 00:08:28.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:28.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:28.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:28.080 issued rwts: total=724,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:28.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:28.080 00:08:28.080 Run status group 0 (all jobs): 00:08:28.080 READ: bw=18.5MiB/s (19.4MB/s), 90.8KiB/s-8184KiB/s (93.0kB/s-8380kB/s), io=18.8MiB (19.7MB), run=1001-1013msec 00:08:28.080 WRITE: bw=22.4MiB/s (23.5MB/s), 2022KiB/s-8931KiB/s (2070kB/s-9145kB/s), io=22.7MiB (23.8MB), run=1001-1013msec 00:08:28.080 00:08:28.080 Disk stats (read/write): 00:08:28.080 nvme0n1: ios=69/512, merge=0/0, ticks=770/98, in_queue=868, util=86.67% 00:08:28.080 nvme0n2: ios=1694/2048, merge=0/0, ticks=1444/340, in_queue=1784, util=98.37% 00:08:28.080 nvme0n3: ios=1557/2015, merge=0/0, ticks=602/355, in_queue=957, util=91.33% 00:08:28.080 nvme0n4: ios=523/1024, merge=0/0, ticks=610/200, in_queue=810, util=89.67% 00:08:28.080 05:03:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:28.080 [global] 00:08:28.080 thread=1 00:08:28.080 invalidate=1 00:08:28.080 rw=randwrite 00:08:28.080 time_based=1 00:08:28.080 runtime=1 00:08:28.080 ioengine=libaio 00:08:28.080 direct=1 00:08:28.080 bs=4096 00:08:28.080 iodepth=1 00:08:28.080 norandommap=0 00:08:28.080 numjobs=1 00:08:28.080 00:08:28.080 verify_dump=1 00:08:28.080 verify_backlog=512 00:08:28.080 verify_state_save=0 00:08:28.080 do_verify=1 00:08:28.080 verify=crc32c-intel 00:08:28.080 [job0] 00:08:28.080 filename=/dev/nvme0n1 00:08:28.080 [job1] 00:08:28.080 filename=/dev/nvme0n2 00:08:28.080 [job2] 00:08:28.080 filename=/dev/nvme0n3 00:08:28.080 [job3] 00:08:28.080 filename=/dev/nvme0n4 00:08:28.080 Could not set queue depth (nvme0n1) 00:08:28.080 Could not set queue depth (nvme0n2) 00:08:28.080 Could not set queue depth (nvme0n3) 00:08:28.080 Could not set queue depth (nvme0n4) 00:08:28.339 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:28.339 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:28.339 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:28.339 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:28.339 fio-3.35 00:08:28.339 Starting 4 threads 00:08:29.733 00:08:29.733 job0: (groupid=0, jobs=1): err= 0: pid=3468961: Mon Dec 9 05:03:05 2024 00:08:29.733 read: IOPS=2000, BW=8000KiB/s (8192kB/s)(8008KiB/1001msec) 00:08:29.733 
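Each fio-wrapper call above (-p nvmf -i 4096 -d 1 -t write or -t randwrite, -r 1 -v) expands to a small verify-enabled libaio job file with one job per exported namespace. Reconstructed from the [global]/[jobN] lines echoed in the log, it is roughly equivalent to the following; the temporary file name here is illustrative only, the wrapper manages its own:

cat > nvmf_verify.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write            # -t write / randwrite / read
time_based=1
runtime=1           # -r 1
ioengine=libaio
direct=1
bs=4096             # -i 4096
iodepth=1           # -d 1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel # verify options enabled by the wrapper's -v flag

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio nvmf_verify.fio

The "Could not set queue depth (nvme0nX)" warnings are fio failing to adjust the block-layer queue depth on the connected devices; they appear to be harmless here, since all four jobs go on to complete and verify.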
slat (nsec): min=6154, max=39633, avg=7950.12, stdev=1883.67 00:08:29.733 clat (usec): min=222, max=526, avg=283.93, stdev=49.70 00:08:29.733 lat (usec): min=230, max=535, avg=291.88, stdev=50.20 00:08:29.733 clat percentiles (usec): 00:08:29.733 | 1.00th=[ 233], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 258], 00:08:29.733 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:08:29.733 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 322], 95.00th=[ 433], 00:08:29.733 | 99.00th=[ 461], 99.50th=[ 469], 99.90th=[ 486], 99.95th=[ 490], 00:08:29.733 | 99.99th=[ 529] 00:08:29.733 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:08:29.733 slat (nsec): min=7672, max=48662, avg=11616.70, stdev=3553.02 00:08:29.733 clat (usec): min=129, max=448, avg=185.91, stdev=34.81 00:08:29.733 lat (usec): min=142, max=458, avg=197.53, stdev=35.65 00:08:29.733 clat percentiles (usec): 00:08:29.733 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 161], 00:08:29.733 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 178], 60.00th=[ 184], 00:08:29.733 | 70.00th=[ 192], 80.00th=[ 204], 90.00th=[ 227], 95.00th=[ 273], 00:08:29.733 | 99.00th=[ 302], 99.50th=[ 318], 99.90th=[ 363], 99.95th=[ 408], 00:08:29.733 | 99.99th=[ 449] 00:08:29.733 bw ( KiB/s): min= 8192, max= 8192, per=34.23%, avg=8192.00, stdev= 0.00, samples=1 00:08:29.733 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:29.733 lat (usec) : 250=52.72%, 500=47.26%, 750=0.02% 00:08:29.733 cpu : usr=3.30%, sys=4.20%, ctx=4051, majf=0, minf=1 00:08:29.733 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:29.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.733 issued rwts: total=2002,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:29.733 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:29.733 job1: (groupid=0, jobs=1): err= 0: pid=3468962: Mon Dec 9 05:03:05 2024 00:08:29.733 read: IOPS=1621, BW=6485KiB/s (6641kB/s)(6660KiB/1027msec) 00:08:29.733 slat (nsec): min=6609, max=27490, avg=8683.47, stdev=1519.13 00:08:29.733 clat (usec): min=206, max=41451, avg=369.59, stdev=2062.47 00:08:29.733 lat (usec): min=213, max=41463, avg=378.28, stdev=2062.56 00:08:29.733 clat percentiles (usec): 00:08:29.733 | 1.00th=[ 221], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 237], 00:08:29.733 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 251], 00:08:29.733 | 70.00th=[ 255], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 285], 00:08:29.733 | 99.00th=[ 334], 99.50th=[ 490], 99.90th=[41157], 99.95th=[41681], 00:08:29.733 | 99.99th=[41681] 00:08:29.733 write: IOPS=1994, BW=7977KiB/s (8168kB/s)(8192KiB/1027msec); 0 zone resets 00:08:29.733 slat (nsec): min=9599, max=45631, avg=12280.19, stdev=2751.55 00:08:29.733 clat (usec): min=137, max=560, avg=176.00, stdev=17.96 00:08:29.733 lat (usec): min=147, max=575, avg=188.28, stdev=18.92 00:08:29.733 clat percentiles (usec): 00:08:29.733 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:08:29.733 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:08:29.733 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 202], 00:08:29.733 | 99.00th=[ 221], 99.50th=[ 249], 99.90th=[ 310], 99.95th=[ 343], 00:08:29.733 | 99.99th=[ 562] 00:08:29.733 bw ( KiB/s): min= 6704, max= 9680, per=34.23%, avg=8192.00, stdev=2104.35, samples=2 00:08:29.733 iops : min= 1676, max= 2420, 
avg=2048.00, stdev=526.09, samples=2 00:08:29.733 lat (usec) : 250=80.80%, 500=18.96%, 750=0.05% 00:08:29.733 lat (msec) : 10=0.03%, 20=0.05%, 50=0.11% 00:08:29.733 cpu : usr=3.12%, sys=5.65%, ctx=3715, majf=0, minf=1 00:08:29.733 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:29.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.733 issued rwts: total=1665,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:29.733 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:29.733 job2: (groupid=0, jobs=1): err= 0: pid=3468963: Mon Dec 9 05:03:05 2024 00:08:29.733 read: IOPS=1067, BW=4269KiB/s (4372kB/s)(4376KiB/1025msec) 00:08:29.733 slat (nsec): min=6823, max=28323, avg=8193.29, stdev=2035.28 00:08:29.733 clat (usec): min=221, max=41510, avg=636.79, stdev=3694.36 00:08:29.733 lat (usec): min=229, max=41524, avg=644.99, stdev=3695.26 00:08:29.733 clat percentiles (usec): 00:08:29.733 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 251], 00:08:29.733 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281], 00:08:29.733 | 70.00th=[ 293], 80.00th=[ 318], 90.00th=[ 433], 95.00th=[ 453], 00:08:29.733 | 99.00th=[ 2343], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:08:29.733 | 99.99th=[41681] 00:08:29.733 write: IOPS=1498, BW=5994KiB/s (6138kB/s)(6144KiB/1025msec); 0 zone resets 00:08:29.733 slat (nsec): min=9017, max=54718, avg=11546.07, stdev=2334.96 00:08:29.733 clat (usec): min=147, max=375, avg=191.94, stdev=30.60 00:08:29.733 lat (usec): min=157, max=387, avg=203.49, stdev=31.09 00:08:29.733 clat percentiles (usec): 00:08:29.733 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:08:29.733 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:08:29.733 | 70.00th=[ 196], 80.00th=[ 204], 90.00th=[ 223], 95.00th=[ 269], 00:08:29.733 | 99.00th=[ 297], 99.50th=[ 351], 99.90th=[ 371], 99.95th=[ 375], 00:08:29.733 | 99.99th=[ 375] 00:08:29.733 bw ( KiB/s): min= 4096, max= 8192, per=25.68%, avg=6144.00, stdev=2896.31, samples=2 00:08:29.733 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:08:29.733 lat (usec) : 250=62.85%, 500=36.39%, 750=0.34% 00:08:29.733 lat (msec) : 4=0.04%, 10=0.04%, 50=0.34% 00:08:29.733 cpu : usr=1.27%, sys=3.32%, ctx=2631, majf=0, minf=1 00:08:29.733 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:29.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.733 issued rwts: total=1094,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:29.733 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:29.733 job3: (groupid=0, jobs=1): err= 0: pid=3468964: Mon Dec 9 05:03:05 2024 00:08:29.733 read: IOPS=22, BW=90.9KiB/s (93.1kB/s)(92.0KiB/1012msec) 00:08:29.733 slat (nsec): min=10181, max=24755, avg=22278.83, stdev=3854.37 00:08:29.733 clat (usec): min=479, max=41057, avg=39180.64, stdev=8437.23 00:08:29.733 lat (usec): min=503, max=41082, avg=39202.92, stdev=8436.81 00:08:29.733 clat percentiles (usec): 00:08:29.733 | 1.00th=[ 482], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:08:29.733 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:29.733 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:29.733 | 99.00th=[41157], 99.50th=[41157], 
99.90th=[41157], 99.95th=[41157], 00:08:29.733 | 99.99th=[41157] 00:08:29.733 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:08:29.733 slat (nsec): min=10611, max=75404, avg=12260.86, stdev=3319.99 00:08:29.733 clat (usec): min=159, max=350, avg=199.34, stdev=21.09 00:08:29.733 lat (usec): min=171, max=426, avg=211.60, stdev=22.33 00:08:29.733 clat percentiles (usec): 00:08:29.733 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 184], 00:08:29.733 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 200], 00:08:29.733 | 70.00th=[ 206], 80.00th=[ 210], 90.00th=[ 221], 95.00th=[ 233], 00:08:29.733 | 99.00th=[ 285], 99.50th=[ 306], 99.90th=[ 351], 99.95th=[ 351], 00:08:29.733 | 99.99th=[ 351] 00:08:29.733 bw ( KiB/s): min= 4096, max= 4096, per=17.12%, avg=4096.00, stdev= 0.00, samples=1 00:08:29.733 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:29.733 lat (usec) : 250=93.27%, 500=2.62% 00:08:29.733 lat (msec) : 50=4.11% 00:08:29.733 cpu : usr=0.59%, sys=0.79%, ctx=537, majf=0, minf=1 00:08:29.733 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:29.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.733 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:29.733 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:29.733 00:08:29.733 Run status group 0 (all jobs): 00:08:29.733 READ: bw=18.2MiB/s (19.1MB/s), 90.9KiB/s-8000KiB/s (93.1kB/s-8192kB/s), io=18.7MiB (19.6MB), run=1001-1027msec 00:08:29.733 WRITE: bw=23.4MiB/s (24.5MB/s), 2024KiB/s-8184KiB/s (2072kB/s-8380kB/s), io=24.0MiB (25.2MB), run=1001-1027msec 00:08:29.733 00:08:29.733 Disk stats (read/write): 00:08:29.733 nvme0n1: ios=1586/1912, merge=0/0, ticks=455/350, in_queue=805, util=87.07% 00:08:29.733 nvme0n2: ios=1676/2048, merge=0/0, ticks=1399/337, in_queue=1736, util=98.27% 00:08:29.733 nvme0n3: ios=1133/1536, merge=0/0, ticks=1405/274, in_queue=1679, util=97.81% 00:08:29.734 nvme0n4: ios=59/512, merge=0/0, ticks=1616/96, in_queue=1712, util=98.74% 00:08:29.734 05:03:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:08:29.734 [global] 00:08:29.734 thread=1 00:08:29.734 invalidate=1 00:08:29.734 rw=write 00:08:29.734 time_based=1 00:08:29.734 runtime=1 00:08:29.734 ioengine=libaio 00:08:29.734 direct=1 00:08:29.734 bs=4096 00:08:29.734 iodepth=128 00:08:29.734 norandommap=0 00:08:29.734 numjobs=1 00:08:29.734 00:08:29.734 verify_dump=1 00:08:29.734 verify_backlog=512 00:08:29.734 verify_state_save=0 00:08:29.734 do_verify=1 00:08:29.734 verify=crc32c-intel 00:08:29.734 [job0] 00:08:29.734 filename=/dev/nvme0n1 00:08:29.734 [job1] 00:08:29.734 filename=/dev/nvme0n2 00:08:29.734 [job2] 00:08:29.734 filename=/dev/nvme0n3 00:08:29.734 [job3] 00:08:29.734 filename=/dev/nvme0n4 00:08:29.734 Could not set queue depth (nvme0n1) 00:08:29.734 Could not set queue depth (nvme0n2) 00:08:29.734 Could not set queue depth (nvme0n3) 00:08:29.734 Could not set queue depth (nvme0n4) 00:08:29.734 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:29.734 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:29.734 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:29.734 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:29.734 fio-3.35 00:08:29.734 Starting 4 threads 00:08:31.107 00:08:31.107 job0: (groupid=0, jobs=1): err= 0: pid=3469336: Mon Dec 9 05:03:07 2024 00:08:31.107 read: IOPS=3858, BW=15.1MiB/s (15.8MB/s)(15.1MiB/1004msec) 00:08:31.107 slat (nsec): min=1415, max=26650k, avg=107482.40, stdev=739844.21 00:08:31.107 clat (usec): min=648, max=48178, avg=13924.00, stdev=5490.70 00:08:31.107 lat (usec): min=3289, max=48186, avg=14031.48, stdev=5516.38 00:08:31.107 clat percentiles (usec): 00:08:31.107 | 1.00th=[ 6587], 5.00th=[ 9765], 10.00th=[10683], 20.00th=[11731], 00:08:31.107 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[12649], 00:08:31.107 | 70.00th=[12911], 80.00th=[13698], 90.00th=[20579], 95.00th=[23200], 00:08:31.107 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:31.107 | 99.99th=[47973] 00:08:31.107 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:08:31.107 slat (usec): min=2, max=11500, avg=132.80, stdev=771.21 00:08:31.107 clat (usec): min=741, max=133067, avg=17773.76, stdev=21739.33 00:08:31.107 lat (usec): min=750, max=133072, avg=17906.56, stdev=21881.66 00:08:31.107 clat percentiles (msec): 00:08:31.107 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 12], 00:08:31.107 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 13], 00:08:31.107 | 70.00th=[ 13], 80.00th=[ 13], 90.00th=[ 22], 95.00th=[ 74], 00:08:31.107 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 133], 99.95th=[ 133], 00:08:31.107 | 99.99th=[ 133] 00:08:31.107 bw ( KiB/s): min=12032, max=20736, per=21.60%, avg=16384.00, stdev=6154.66, samples=2 00:08:31.107 iops : min= 3008, max= 5184, avg=4096.00, stdev=1538.66, samples=2 00:08:31.107 lat (usec) : 750=0.04%, 1000=0.01% 00:08:31.107 lat (msec) : 4=0.40%, 10=8.93%, 20=78.44%, 50=8.97%, 100=1.41% 00:08:31.107 lat (msec) : 250=1.79% 00:08:31.107 cpu : usr=2.39%, sys=4.69%, ctx=448, majf=0, minf=1 00:08:31.107 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:08:31.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:31.107 issued rwts: total=3874,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:31.107 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:31.107 job1: (groupid=0, jobs=1): err= 0: pid=3469344: Mon Dec 9 05:03:07 2024 00:08:31.107 read: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec) 00:08:31.107 slat (nsec): min=1252, max=11098k, avg=96819.28, stdev=712123.38 00:08:31.107 clat (usec): min=4156, max=23425, avg=12003.19, stdev=3116.52 00:08:31.107 lat (usec): min=4165, max=25786, avg=12100.00, stdev=3168.18 00:08:31.107 clat percentiles (usec): 00:08:31.107 | 1.00th=[ 4686], 5.00th=[ 7832], 10.00th=[ 9241], 20.00th=[10028], 00:08:31.107 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11338], 60.00th=[11863], 00:08:31.107 | 70.00th=[12256], 80.00th=[13698], 90.00th=[17171], 95.00th=[18744], 00:08:31.107 | 99.00th=[20841], 99.50th=[21627], 99.90th=[22414], 99.95th=[22414], 00:08:31.107 | 99.99th=[23462] 00:08:31.107 write: IOPS=5926, BW=23.2MiB/s (24.3MB/s)(23.3MiB/1008msec); 0 zone resets 00:08:31.107 slat (usec): min=2, max=8874, avg=70.75, stdev=380.72 00:08:31.107 clat (usec): min=1516, max=22356, avg=10083.10, stdev=2236.97 00:08:31.107 lat (usec): 
min=1530, max=22360, avg=10153.86, stdev=2276.85 00:08:31.107 clat percentiles (usec): 00:08:31.107 | 1.00th=[ 3359], 5.00th=[ 5342], 10.00th=[ 7111], 20.00th=[ 8979], 00:08:31.107 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[10945], 00:08:31.107 | 70.00th=[11600], 80.00th=[11731], 90.00th=[11863], 95.00th=[11994], 00:08:31.107 | 99.00th=[15664], 99.50th=[17957], 99.90th=[21627], 99.95th=[22152], 00:08:31.107 | 99.99th=[22414] 00:08:31.107 bw ( KiB/s): min=22200, max=24576, per=30.83%, avg=23388.00, stdev=1680.09, samples=2 00:08:31.107 iops : min= 5550, max= 6144, avg=5847.00, stdev=420.02, samples=2 00:08:31.107 lat (msec) : 2=0.02%, 4=1.01%, 10=30.55%, 20=67.15%, 50=1.28% 00:08:31.107 cpu : usr=3.48%, sys=6.95%, ctx=645, majf=0, minf=1 00:08:31.107 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:08:31.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:31.107 issued rwts: total=5632,5974,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:31.107 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:31.107 job2: (groupid=0, jobs=1): err= 0: pid=3469352: Mon Dec 9 05:03:07 2024 00:08:31.107 read: IOPS=4167, BW=16.3MiB/s (17.1MB/s)(17.0MiB/1044msec) 00:08:31.107 slat (nsec): min=1432, max=12219k, avg=117236.61, stdev=854516.36 00:08:31.107 clat (usec): min=5008, max=55392, avg=15835.96, stdev=7711.13 00:08:31.107 lat (usec): min=5014, max=55396, avg=15953.20, stdev=7734.47 00:08:31.107 clat percentiles (usec): 00:08:31.107 | 1.00th=[ 7963], 5.00th=[ 9896], 10.00th=[11731], 20.00th=[12518], 00:08:31.107 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13698], 60.00th=[14091], 00:08:31.107 | 70.00th=[14746], 80.00th=[18220], 90.00th=[21365], 95.00th=[23725], 00:08:31.107 | 99.00th=[55313], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:08:31.108 | 99.99th=[55313] 00:08:31.108 write: IOPS=4413, BW=17.2MiB/s (18.1MB/s)(18.0MiB/1044msec); 0 zone resets 00:08:31.108 slat (usec): min=2, max=15539, avg=99.00, stdev=644.66 00:08:31.108 clat (usec): min=1047, max=41725, avg=13525.57, stdev=5544.82 00:08:31.108 lat (usec): min=1061, max=41732, avg=13624.57, stdev=5595.80 00:08:31.108 clat percentiles (usec): 00:08:31.108 | 1.00th=[ 4228], 5.00th=[ 7504], 10.00th=[ 9896], 20.00th=[11207], 00:08:31.108 | 30.00th=[11469], 40.00th=[12387], 50.00th=[12649], 60.00th=[13042], 00:08:31.108 | 70.00th=[13304], 80.00th=[13698], 90.00th=[20317], 95.00th=[23725], 00:08:31.108 | 99.00th=[39584], 99.50th=[40109], 99.90th=[41681], 99.95th=[41681], 00:08:31.108 | 99.99th=[41681] 00:08:31.108 bw ( KiB/s): min=16432, max=20432, per=24.30%, avg=18432.00, stdev=2828.43, samples=2 00:08:31.108 iops : min= 4108, max= 5108, avg=4608.00, stdev=707.11, samples=2 00:08:31.108 lat (msec) : 2=0.07%, 4=0.31%, 10=7.95%, 20=78.94%, 50=11.36% 00:08:31.108 lat (msec) : 100=1.37% 00:08:31.108 cpu : usr=4.12%, sys=5.08%, ctx=439, majf=0, minf=1 00:08:31.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:08:31.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:31.108 issued rwts: total=4351,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:31.108 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:31.108 job3: (groupid=0, jobs=1): err= 0: pid=3469359: Mon Dec 9 05:03:07 2024 00:08:31.108 read: IOPS=4835, BW=18.9MiB/s 
(19.8MB/s)(19.0MiB/1005msec) 00:08:31.108 slat (nsec): min=1422, max=12458k, avg=111039.90, stdev=774840.12 00:08:31.108 clat (usec): min=2599, max=26440, avg=13733.35, stdev=3572.56 00:08:31.108 lat (usec): min=4298, max=26471, avg=13844.39, stdev=3617.03 00:08:31.108 clat percentiles (usec): 00:08:31.108 | 1.00th=[ 5473], 5.00th=[ 9503], 10.00th=[10421], 20.00th=[11207], 00:08:31.108 | 30.00th=[11994], 40.00th=[12649], 50.00th=[13173], 60.00th=[13566], 00:08:31.108 | 70.00th=[14091], 80.00th=[15139], 90.00th=[19530], 95.00th=[21627], 00:08:31.108 | 99.00th=[24249], 99.50th=[24773], 99.90th=[25560], 99.95th=[25560], 00:08:31.108 | 99.99th=[26346] 00:08:31.108 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:08:31.108 slat (usec): min=2, max=12252, avg=84.87, stdev=434.52 00:08:31.108 clat (usec): min=1816, max=25854, avg=11757.01, stdev=3028.92 00:08:31.108 lat (usec): min=1830, max=25867, avg=11841.88, stdev=3058.22 00:08:31.108 clat percentiles (usec): 00:08:31.108 | 1.00th=[ 4113], 5.00th=[ 5800], 10.00th=[ 8455], 20.00th=[10028], 00:08:31.108 | 30.00th=[11207], 40.00th=[11469], 50.00th=[12125], 60.00th=[12649], 00:08:31.108 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13698], 95.00th=[13829], 00:08:31.108 | 99.00th=[24249], 99.50th=[24773], 99.90th=[25560], 99.95th=[25822], 00:08:31.108 | 99.99th=[25822] 00:08:31.108 bw ( KiB/s): min=19600, max=21360, per=27.00%, avg=20480.00, stdev=1244.51, samples=2 00:08:31.108 iops : min= 4900, max= 5340, avg=5120.00, stdev=311.13, samples=2 00:08:31.108 lat (msec) : 2=0.03%, 4=0.36%, 10=13.96%, 20=79.90%, 50=5.75% 00:08:31.108 cpu : usr=3.78%, sys=5.18%, ctx=614, majf=0, minf=1 00:08:31.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:31.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:31.108 issued rwts: total=4860,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:31.108 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:31.108 00:08:31.108 Run status group 0 (all jobs): 00:08:31.108 READ: bw=70.0MiB/s (73.4MB/s), 15.1MiB/s-21.8MiB/s (15.8MB/s-22.9MB/s), io=73.1MiB (76.7MB), run=1004-1044msec 00:08:31.108 WRITE: bw=74.1MiB/s (77.7MB/s), 15.9MiB/s-23.2MiB/s (16.7MB/s-24.3MB/s), io=77.3MiB (81.1MB), run=1004-1044msec 00:08:31.108 00:08:31.108 Disk stats (read/write): 00:08:31.108 nvme0n1: ios=3107/3297, merge=0/0, ticks=23652/41330, in_queue=64982, util=96.49% 00:08:31.108 nvme0n2: ios=4725/5120, merge=0/0, ticks=54650/50226, in_queue=104876, util=86.59% 00:08:31.108 nvme0n3: ios=3626/3919, merge=0/0, ticks=51336/51910, in_queue=103246, util=99.37% 00:08:31.108 nvme0n4: ios=4137/4423, merge=0/0, ticks=49856/44335, in_queue=94191, util=98.00% 00:08:31.108 05:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:08:31.108 [global] 00:08:31.108 thread=1 00:08:31.108 invalidate=1 00:08:31.108 rw=randwrite 00:08:31.108 time_based=1 00:08:31.108 runtime=1 00:08:31.108 ioengine=libaio 00:08:31.108 direct=1 00:08:31.108 bs=4096 00:08:31.108 iodepth=128 00:08:31.108 norandommap=0 00:08:31.108 numjobs=1 00:08:31.108 00:08:31.108 verify_dump=1 00:08:31.108 verify_backlog=512 00:08:31.108 verify_state_save=0 00:08:31.108 do_verify=1 00:08:31.108 verify=crc32c-intel 00:08:31.108 [job0] 00:08:31.108 filename=/dev/nvme0n1 00:08:31.108 
[job1] 00:08:31.108 filename=/dev/nvme0n2 00:08:31.108 [job2] 00:08:31.108 filename=/dev/nvme0n3 00:08:31.108 [job3] 00:08:31.108 filename=/dev/nvme0n4 00:08:31.108 Could not set queue depth (nvme0n1) 00:08:31.108 Could not set queue depth (nvme0n2) 00:08:31.108 Could not set queue depth (nvme0n3) 00:08:31.108 Could not set queue depth (nvme0n4) 00:08:31.366 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:31.366 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:31.366 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:31.366 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:31.366 fio-3.35 00:08:31.366 Starting 4 threads 00:08:32.741 00:08:32.741 job0: (groupid=0, jobs=1): err= 0: pid=3469817: Mon Dec 9 05:03:09 2024 00:08:32.741 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:08:32.741 slat (nsec): min=1414, max=11021k, avg=98377.99, stdev=617683.07 00:08:32.741 clat (usec): min=4318, max=35291, avg=12295.12, stdev=2306.06 00:08:32.741 lat (usec): min=4324, max=35294, avg=12393.50, stdev=2344.70 00:08:32.741 clat percentiles (usec): 00:08:32.741 | 1.00th=[ 7767], 5.00th=[ 8979], 10.00th=[10028], 20.00th=[11207], 00:08:32.741 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:08:32.741 | 70.00th=[12518], 80.00th=[13042], 90.00th=[14484], 95.00th=[15926], 00:08:32.741 | 99.00th=[22152], 99.50th=[22938], 99.90th=[27395], 99.95th=[27395], 00:08:32.741 | 99.99th=[35390] 00:08:32.741 write: IOPS=5613, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:08:32.741 slat (usec): min=2, max=5355, avg=82.24, stdev=410.55 00:08:32.741 clat (usec): min=524, max=23493, avg=11400.12, stdev=2113.04 00:08:32.741 lat (usec): min=1457, max=23496, avg=11482.36, stdev=2137.61 00:08:32.741 clat percentiles (usec): 00:08:32.741 | 1.00th=[ 3621], 5.00th=[ 7898], 10.00th=[ 9765], 20.00th=[11076], 00:08:32.741 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:08:32.741 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12256], 95.00th=[13304], 00:08:32.741 | 99.00th=[19530], 99.50th=[21365], 99.90th=[22938], 99.95th=[23462], 00:08:32.741 | 99.99th=[23462] 00:08:32.741 bw ( KiB/s): min=21336, max=22648, per=26.29%, avg=21992.00, stdev=927.72, samples=2 00:08:32.741 iops : min= 5334, max= 5662, avg=5498.00, stdev=231.93, samples=2 00:08:32.741 lat (usec) : 750=0.01% 00:08:32.741 lat (msec) : 2=0.09%, 4=0.76%, 10=9.28%, 20=88.28%, 50=1.57% 00:08:32.741 cpu : usr=3.50%, sys=6.49%, ctx=612, majf=0, minf=1 00:08:32.741 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:08:32.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:32.741 issued rwts: total=5120,5625,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:32.741 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:32.741 job1: (groupid=0, jobs=1): err= 0: pid=3469836: Mon Dec 9 05:03:09 2024 00:08:32.741 read: IOPS=5231, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1003msec) 00:08:32.741 slat (nsec): min=1318, max=12221k, avg=93550.15, stdev=582218.06 00:08:32.741 clat (usec): min=1250, max=26087, avg=11935.37, stdev=2452.43 00:08:32.741 lat (usec): min=5125, max=26093, avg=12028.92, stdev=2487.94 00:08:32.741 clat 
percentiles (usec): 00:08:32.741 | 1.00th=[ 5538], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[10945], 00:08:32.741 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11863], 60.00th=[11994], 00:08:32.741 | 70.00th=[12387], 80.00th=[12780], 90.00th=[14615], 95.00th=[16057], 00:08:32.741 | 99.00th=[22152], 99.50th=[22938], 99.90th=[24773], 99.95th=[24773], 00:08:32.741 | 99.99th=[26084] 00:08:32.741 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:08:32.741 slat (usec): min=2, max=9033, avg=84.84, stdev=469.32 00:08:32.741 clat (usec): min=1412, max=24964, avg=11470.62, stdev=1697.44 00:08:32.741 lat (usec): min=1429, max=25694, avg=11555.46, stdev=1751.42 00:08:32.741 clat percentiles (usec): 00:08:32.741 | 1.00th=[ 5866], 5.00th=[ 8586], 10.00th=[10028], 20.00th=[10945], 00:08:32.741 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11731], 60.00th=[11731], 00:08:32.741 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12256], 95.00th=[12911], 00:08:32.741 | 99.00th=[16581], 99.50th=[17171], 99.90th=[25035], 99.95th=[25035], 00:08:32.741 | 99.99th=[25035] 00:08:32.741 bw ( KiB/s): min=22336, max=22712, per=26.93%, avg=22524.00, stdev=265.87, samples=2 00:08:32.741 iops : min= 5584, max= 5678, avg=5631.00, stdev=66.47, samples=2 00:08:32.741 lat (msec) : 2=0.06%, 10=12.57%, 20=86.30%, 50=1.07% 00:08:32.741 cpu : usr=3.59%, sys=6.89%, ctx=578, majf=0, minf=1 00:08:32.742 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:08:32.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:32.742 issued rwts: total=5247,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:32.742 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:32.742 job2: (groupid=0, jobs=1): err= 0: pid=3469872: Mon Dec 9 05:03:09 2024 00:08:32.742 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:08:32.742 slat (nsec): min=1374, max=9473.3k, avg=108368.95, stdev=644168.22 00:08:32.742 clat (usec): min=5422, max=23594, avg=13587.20, stdev=1948.19 00:08:32.742 lat (usec): min=5432, max=23604, avg=13695.57, stdev=2000.78 00:08:32.742 clat percentiles (usec): 00:08:32.742 | 1.00th=[ 8717], 5.00th=[10028], 10.00th=[11076], 20.00th=[12780], 00:08:32.742 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13435], 60.00th=[13698], 00:08:32.742 | 70.00th=[13829], 80.00th=[14484], 90.00th=[16057], 95.00th=[16909], 00:08:32.742 | 99.00th=[19268], 99.50th=[20055], 99.90th=[23462], 99.95th=[23462], 00:08:32.742 | 99.99th=[23725] 00:08:32.742 write: IOPS=4735, BW=18.5MiB/s (19.4MB/s)(18.5MiB/1001msec); 0 zone resets 00:08:32.742 slat (usec): min=2, max=11181, avg=99.14, stdev=526.96 00:08:32.742 clat (usec): min=728, max=24772, avg=13581.37, stdev=2430.68 00:08:32.742 lat (usec): min=1549, max=24776, avg=13680.51, stdev=2469.60 00:08:32.742 clat percentiles (usec): 00:08:32.742 | 1.00th=[ 6718], 5.00th=[ 9634], 10.00th=[11207], 20.00th=[12518], 00:08:32.742 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13566], 60.00th=[13698], 00:08:32.742 | 70.00th=[13960], 80.00th=[14484], 90.00th=[16188], 95.00th=[17957], 00:08:32.742 | 99.00th=[22152], 99.50th=[22938], 99.90th=[23725], 99.95th=[24511], 00:08:32.742 | 99.99th=[24773] 00:08:32.742 bw ( KiB/s): min=19640, max=19640, per=23.48%, avg=19640.00, stdev= 0.00, samples=1 00:08:32.742 iops : min= 4910, max= 4910, avg=4910.00, stdev= 0.00, samples=1 00:08:32.742 lat (usec) : 750=0.01% 00:08:32.742 lat (msec) : 2=0.02%, 10=5.38%, 20=93.00%, 
50=1.58% 00:08:32.742 cpu : usr=3.80%, sys=5.60%, ctx=565, majf=0, minf=1 00:08:32.742 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:08:32.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:32.742 issued rwts: total=4608,4740,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:32.742 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:32.742 job3: (groupid=0, jobs=1): err= 0: pid=3469883: Mon Dec 9 05:03:09 2024 00:08:32.742 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:08:32.742 slat (nsec): min=1323, max=12478k, avg=112337.43, stdev=865797.95 00:08:32.742 clat (usec): min=4788, max=26479, avg=14118.33, stdev=2734.19 00:08:32.742 lat (usec): min=4794, max=31999, avg=14230.67, stdev=2842.25 00:08:32.742 clat percentiles (usec): 00:08:32.742 | 1.00th=[ 8586], 5.00th=[11469], 10.00th=[12256], 20.00th=[12780], 00:08:32.742 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:08:32.742 | 70.00th=[13960], 80.00th=[15139], 90.00th=[18220], 95.00th=[20317], 00:08:32.742 | 99.00th=[23462], 99.50th=[24249], 99.90th=[25035], 99.95th=[25822], 00:08:32.742 | 99.99th=[26608] 00:08:32.742 write: IOPS=4978, BW=19.4MiB/s (20.4MB/s)(19.5MiB/1004msec); 0 zone resets 00:08:32.742 slat (usec): min=2, max=11237, avg=90.56, stdev=687.29 00:08:32.742 clat (usec): min=1552, max=24946, avg=12451.97, stdev=2449.93 00:08:32.742 lat (usec): min=1565, max=24980, avg=12542.53, stdev=2534.36 00:08:32.742 clat percentiles (usec): 00:08:32.742 | 1.00th=[ 3949], 5.00th=[ 7570], 10.00th=[10028], 20.00th=[11469], 00:08:32.742 | 30.00th=[12256], 40.00th=[12649], 50.00th=[12780], 60.00th=[13173], 00:08:32.742 | 70.00th=[13435], 80.00th=[13566], 90.00th=[13698], 95.00th=[14222], 00:08:32.742 | 99.00th=[22938], 99.50th=[22938], 99.90th=[24249], 99.95th=[24511], 00:08:32.742 | 99.99th=[25035] 00:08:32.742 bw ( KiB/s): min=18488, max=20480, per=23.29%, avg=19484.00, stdev=1408.56, samples=2 00:08:32.742 iops : min= 4622, max= 5120, avg=4871.00, stdev=352.14, samples=2 00:08:32.742 lat (msec) : 2=0.02%, 4=0.53%, 10=5.61%, 20=90.60%, 50=3.24% 00:08:32.742 cpu : usr=3.79%, sys=6.08%, ctx=380, majf=0, minf=1 00:08:32.742 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:08:32.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:32.742 issued rwts: total=4608,4998,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:32.742 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:32.742 00:08:32.742 Run status group 0 (all jobs): 00:08:32.742 READ: bw=76.2MiB/s (79.9MB/s), 17.9MiB/s-20.4MiB/s (18.8MB/s-21.4MB/s), io=76.5MiB (80.2MB), run=1001-1004msec 00:08:32.742 WRITE: bw=81.7MiB/s (85.7MB/s), 18.5MiB/s-21.9MiB/s (19.4MB/s-23.0MB/s), io=82.0MiB (86.0MB), run=1001-1004msec 00:08:32.742 00:08:32.742 Disk stats (read/write): 00:08:32.742 nvme0n1: ios=4148/4607, merge=0/0, ticks=29041/29542, in_queue=58583, util=97.39% 00:08:32.742 nvme0n2: ios=4096/4595, merge=0/0, ticks=28052/30030, in_queue=58082, util=82.92% 00:08:32.742 nvme0n3: ios=3605/3847, merge=0/0, ticks=29113/29611, in_queue=58724, util=97.29% 00:08:32.742 nvme0n4: ios=3603/4028, merge=0/0, ticks=50074/48626, in_queue=98700, util=96.13% 00:08:32.742 05:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:08:32.742 05:03:09 
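The final phase, which follows below, is the hotplug check: a 10-second read job is launched in the background (fio_pid=3470120) and the RAID and malloc bdevs are deleted out from under it, so each file eventually returns "Operation not supported" and fio exits non-zero, which the test treats as the expected outcome. Condensed, the logic looks roughly like this (paths abbreviated):

scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3                                      # let the four read jobs get going
scripts/rpc.py bdev_raid_delete concat0      # namespaces vanish while I/O is in flight
scripts/rpc.py bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    scripts/rpc.py bdev_malloc_delete "$m"
done
fio_status=0
wait "$fio_pid" || fio_status=$?
if [ "$fio_status" -ne 0 ]; then
    echo 'nvmf hotplug test: fio failed as expected'
fi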
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3470120 00:08:32.742 05:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:08:32.742 05:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:08:32.742 [global] 00:08:32.742 thread=1 00:08:32.742 invalidate=1 00:08:32.742 rw=read 00:08:32.742 time_based=1 00:08:32.742 runtime=10 00:08:32.742 ioengine=libaio 00:08:32.742 direct=1 00:08:32.742 bs=4096 00:08:32.742 iodepth=1 00:08:32.742 norandommap=1 00:08:32.742 numjobs=1 00:08:32.742 00:08:32.742 [job0] 00:08:32.742 filename=/dev/nvme0n1 00:08:32.742 [job1] 00:08:32.742 filename=/dev/nvme0n2 00:08:32.742 [job2] 00:08:32.742 filename=/dev/nvme0n3 00:08:32.742 [job3] 00:08:32.742 filename=/dev/nvme0n4 00:08:32.742 Could not set queue depth (nvme0n1) 00:08:32.742 Could not set queue depth (nvme0n2) 00:08:32.742 Could not set queue depth (nvme0n3) 00:08:32.742 Could not set queue depth (nvme0n4) 00:08:33.000 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:33.000 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:33.000 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:33.000 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:33.000 fio-3.35 00:08:33.000 Starting 4 threads 00:08:35.535 05:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:08:35.794 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=27705344, buflen=4096 00:08:35.794 fio: pid=3470525, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:35.795 05:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:08:36.054 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=31039488, buflen=4096 00:08:36.054 fio: pid=3470521, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:36.054 05:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:36.054 05:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:08:36.314 05:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:36.314 05:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:08:36.314 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=24231936, buflen=4096 00:08:36.314 fio: pid=3470492, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:36.573 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=23564288, buflen=4096 00:08:36.573 fio: pid=3470505, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:36.573 05:03:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:36.573 05:03:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:08:36.573 00:08:36.573 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3470492: Mon Dec 9 05:03:13 2024 00:08:36.573 read: IOPS=1861, BW=7444KiB/s (7623kB/s)(23.1MiB/3179msec) 00:08:36.573 slat (usec): min=6, max=16694, avg=11.02, stdev=216.94 00:08:36.573 clat (usec): min=221, max=42046, avg=520.83, stdev=2433.06 00:08:36.573 lat (usec): min=229, max=58064, avg=531.85, stdev=2490.25 00:08:36.573 clat percentiles (usec): 00:08:36.573 | 1.00th=[ 245], 5.00th=[ 273], 10.00th=[ 297], 20.00th=[ 314], 00:08:36.573 | 30.00th=[ 334], 40.00th=[ 355], 50.00th=[ 367], 60.00th=[ 379], 00:08:36.573 | 70.00th=[ 408], 80.00th=[ 445], 90.00th=[ 482], 95.00th=[ 502], 00:08:36.573 | 99.00th=[ 545], 99.50th=[ 594], 99.90th=[41681], 99.95th=[42206], 00:08:36.573 | 99.99th=[42206] 00:08:36.573 bw ( KiB/s): min= 93, max=10808, per=25.57%, avg=7882.17, stdev=4307.91, samples=6 00:08:36.573 iops : min= 23, max= 2702, avg=1970.50, stdev=1077.07, samples=6 00:08:36.573 lat (usec) : 250=1.72%, 500=93.27%, 750=4.61%, 1000=0.02% 00:08:36.573 lat (msec) : 50=0.35% 00:08:36.573 cpu : usr=0.98%, sys=3.12%, ctx=5919, majf=0, minf=2 00:08:36.574 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:36.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.574 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.574 issued rwts: total=5917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:36.574 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:36.574 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3470505: Mon Dec 9 05:03:13 2024 00:08:36.574 read: IOPS=1704, BW=6818KiB/s (6982kB/s)(22.5MiB/3375msec) 00:08:36.574 slat (usec): min=6, max=11793, avg=11.89, stdev=155.35 00:08:36.574 clat (usec): min=209, max=44662, avg=568.84, stdev=2692.63 00:08:36.574 lat (usec): min=218, max=53064, avg=580.72, stdev=2728.71 00:08:36.574 clat percentiles (usec): 00:08:36.574 | 1.00th=[ 235], 5.00th=[ 255], 10.00th=[ 273], 20.00th=[ 318], 00:08:36.574 | 30.00th=[ 347], 40.00th=[ 363], 50.00th=[ 379], 60.00th=[ 408], 00:08:36.574 | 70.00th=[ 441], 80.00th=[ 478], 90.00th=[ 502], 95.00th=[ 519], 00:08:36.574 | 99.00th=[ 570], 99.50th=[ 775], 99.90th=[41157], 99.95th=[42206], 00:08:36.574 | 99.99th=[44827] 00:08:36.574 bw ( KiB/s): min= 2264, max=10552, per=24.62%, avg=7589.33, stdev=3153.59, samples=6 00:08:36.574 iops : min= 566, max= 2638, avg=1897.33, stdev=788.40, samples=6 00:08:36.574 lat (usec) : 250=3.75%, 500=84.95%, 750=10.78%, 1000=0.03% 00:08:36.574 lat (msec) : 2=0.02%, 10=0.02%, 50=0.43% 00:08:36.574 cpu : usr=0.98%, sys=3.11%, ctx=5756, majf=0, minf=2 00:08:36.574 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:36.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.574 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.574 issued rwts: total=5754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:36.574 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:36.574 job2: (groupid=0, jobs=1): err=95 
(file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3470521: Mon Dec 9 05:03:13 2024 00:08:36.574 read: IOPS=2577, BW=10.1MiB/s (10.6MB/s)(29.6MiB/2941msec) 00:08:36.574 slat (nsec): min=6439, max=70871, avg=7615.59, stdev=1362.22 00:08:36.574 clat (usec): min=213, max=41993, avg=376.49, stdev=1148.16 00:08:36.574 lat (usec): min=228, max=42008, avg=384.11, stdev=1148.35 00:08:36.574 clat percentiles (usec): 00:08:36.574 | 1.00th=[ 233], 5.00th=[ 245], 10.00th=[ 253], 20.00th=[ 273], 00:08:36.574 | 30.00th=[ 302], 40.00th=[ 318], 50.00th=[ 338], 60.00th=[ 359], 00:08:36.574 | 70.00th=[ 371], 80.00th=[ 404], 90.00th=[ 453], 95.00th=[ 482], 00:08:36.574 | 99.00th=[ 519], 99.50th=[ 545], 99.90th=[ 742], 99.95th=[41157], 00:08:36.574 | 99.99th=[42206] 00:08:36.574 bw ( KiB/s): min= 5720, max=11120, per=31.90%, avg=9835.20, stdev=2310.35, samples=5 00:08:36.574 iops : min= 1430, max= 2780, avg=2458.80, stdev=577.59, samples=5 00:08:36.574 lat (usec) : 250=7.61%, 500=90.03%, 750=2.26% 00:08:36.574 lat (msec) : 4=0.01%, 50=0.08% 00:08:36.574 cpu : usr=0.54%, sys=2.52%, ctx=7580, majf=0, minf=1 00:08:36.574 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:36.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.574 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.574 issued rwts: total=7579,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:36.574 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:36.574 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3470525: Mon Dec 9 05:03:13 2024 00:08:36.574 read: IOPS=2480, BW=9922KiB/s (10.2MB/s)(26.4MiB/2727msec) 00:08:36.574 slat (nsec): min=6490, max=32267, avg=7948.75, stdev=1257.22 00:08:36.574 clat (usec): min=211, max=41929, avg=390.42, stdev=1497.52 00:08:36.574 lat (usec): min=219, max=41953, avg=398.37, stdev=1498.00 00:08:36.574 clat percentiles (usec): 00:08:36.574 | 1.00th=[ 231], 5.00th=[ 243], 10.00th=[ 253], 20.00th=[ 277], 00:08:36.574 | 30.00th=[ 297], 40.00th=[ 314], 50.00th=[ 330], 60.00th=[ 343], 00:08:36.574 | 70.00th=[ 359], 80.00th=[ 375], 90.00th=[ 429], 95.00th=[ 469], 00:08:36.574 | 99.00th=[ 502], 99.50th=[ 519], 99.90th=[40633], 99.95th=[41157], 00:08:36.574 | 99.99th=[41681] 00:08:36.574 bw ( KiB/s): min= 5032, max=11680, per=31.71%, avg=9777.60, stdev=2830.83, samples=5 00:08:36.574 iops : min= 1258, max= 2920, avg=2444.40, stdev=707.71, samples=5 00:08:36.574 lat (usec) : 250=8.12%, 500=90.81%, 750=0.90% 00:08:36.574 lat (msec) : 4=0.01%, 20=0.01%, 50=0.13% 00:08:36.574 cpu : usr=0.59%, sys=2.57%, ctx=6765, majf=0, minf=2 00:08:36.574 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:36.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.574 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.574 issued rwts: total=6765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:36.574 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:36.574 00:08:36.574 Run status group 0 (all jobs): 00:08:36.574 READ: bw=30.1MiB/s (31.6MB/s), 6818KiB/s-10.1MiB/s (6982kB/s-10.6MB/s), io=102MiB (107MB), run=2727-3375msec 00:08:36.574 00:08:36.574 Disk stats (read/write): 00:08:36.574 nvme0n1: ios=5914/0, merge=0/0, ticks=2924/0, in_queue=2924, util=95.13% 00:08:36.574 nvme0n2: ios=5752/0, merge=0/0, ticks=3140/0, in_queue=3140, util=95.98% 00:08:36.574 nvme0n3: ios=7321/0, 
merge=0/0, ticks=2761/0, in_queue=2761, util=96.52% 00:08:36.574 nvme0n4: ios=6390/0, merge=0/0, ticks=2519/0, in_queue=2519, util=96.44% 00:08:36.833 05:03:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:36.833 05:03:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:08:36.833 05:03:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:36.833 05:03:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:08:37.093 05:03:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:37.093 05:03:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:08:37.352 05:03:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:37.352 05:03:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:08:37.611 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:08:37.611 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3470120 00:08:37.611 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:08:37.611 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:37.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.611 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:37.611 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:08:37.611 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:37.611 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.611 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:37.611 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.611 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:08:37.611 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:08:37.611 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:08:37.611 nvmf hotplug test: fio failed as expected 00:08:37.611 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:37.869 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:08:37.869 05:03:14 
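What the run above exercises is the hotplug pattern: reads are kept in flight with fio while the backing bdevs are deleted over RPC, so each job dies with "Operation not supported" and the script treats the resulting non-zero fio status as the expected outcome. A minimal sketch of the same sequence, assuming the workspace path and bdev names from this run:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # 10s of 4k QD=1 reads against the connected namespaces, in the background
  $SPDK/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3
  # pull the bdevs out from under the subsystem while I/O is running
  $SPDK/scripts/rpc.py bdev_raid_delete concat0
  $SPDK/scripts/rpc.py bdev_raid_delete raid0
  for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      $SPDK/scripts/rpc.py bdev_malloc_delete $m
  done
  # fio exits non-zero because every job hit "Operation not supported"
  wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'

As a sanity check on the numbers reported above, job0's average of roughly 1970 IOPS at bs=4096 works out to about 7882 KiB/s, which matches the average bandwidth fio prints for that job.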
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:08:37.869 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:08:37.869 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:08:37.869 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:08:37.869 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:37.869 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:08:37.869 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:37.869 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:08:37.869 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:37.869 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:37.869 rmmod nvme_tcp 00:08:37.869 rmmod nvme_fabrics 00:08:37.869 rmmod nvme_keyring 00:08:37.869 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:37.869 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:08:37.869 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:08:37.869 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3467104 ']' 00:08:37.869 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3467104 00:08:37.869 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3467104 ']' 00:08:37.869 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3467104 00:08:37.869 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:08:37.869 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:37.870 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3467104 00:08:38.128 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:38.128 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:38.128 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3467104' 00:08:38.128 killing process with pid 3467104 00:08:38.128 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3467104 00:08:38.128 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3467104 00:08:38.128 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:38.128 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:38.128 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:38.128 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:08:38.128 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:08:38.128 05:03:14 
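The iptr helper invoked at the end of this teardown (its pieces, iptables-save, grep -v SPDK_NVMF and iptables-restore, are visible in the trace here and on the next lines) removes only the rules the tests tagged with an SPDK_NVMF comment. Presumably the pieces combine roughly like this one-liner; this is a sketch of the idea, not the helper's exact source:

  # keep every firewall rule except the ones carrying the SPDK_NVMF comment,
  # then feed the filtered dump back to iptables-restore
  iptables-save | grep -v SPDK_NVMF | iptables-restore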
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:38.128 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:08:38.128 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:38.128 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:38.128 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.128 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.128 05:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.659 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:40.659 00:08:40.659 real 0m26.737s 00:08:40.659 user 1m46.438s 00:08:40.659 sys 0m8.873s 00:08:40.659 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.659 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:40.660 ************************************ 00:08:40.660 END TEST nvmf_fio_target 00:08:40.660 ************************************ 00:08:40.660 05:03:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:40.660 05:03:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:40.660 05:03:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.660 05:03:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:40.660 ************************************ 00:08:40.660 START TEST nvmf_bdevio 00:08:40.660 ************************************ 00:08:40.660 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:40.660 * Looking for test storage... 
00:08:40.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:40.660 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:40.660 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:08:40.660 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:40.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.660 --rc genhtml_branch_coverage=1 00:08:40.660 --rc genhtml_function_coverage=1 00:08:40.660 --rc genhtml_legend=1 00:08:40.660 --rc geninfo_all_blocks=1 00:08:40.660 --rc geninfo_unexecuted_blocks=1 00:08:40.660 00:08:40.660 ' 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:40.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.660 --rc genhtml_branch_coverage=1 00:08:40.660 --rc genhtml_function_coverage=1 00:08:40.660 --rc genhtml_legend=1 00:08:40.660 --rc geninfo_all_blocks=1 00:08:40.660 --rc geninfo_unexecuted_blocks=1 00:08:40.660 00:08:40.660 ' 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:40.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.660 --rc genhtml_branch_coverage=1 00:08:40.660 --rc genhtml_function_coverage=1 00:08:40.660 --rc genhtml_legend=1 00:08:40.660 --rc geninfo_all_blocks=1 00:08:40.660 --rc geninfo_unexecuted_blocks=1 00:08:40.660 00:08:40.660 ' 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:40.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.660 --rc genhtml_branch_coverage=1 00:08:40.660 --rc genhtml_function_coverage=1 00:08:40.660 --rc genhtml_legend=1 00:08:40.660 --rc geninfo_all_blocks=1 00:08:40.660 --rc geninfo_unexecuted_blocks=1 00:08:40.660 00:08:40.660 ' 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.660 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:40.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:08:40.661 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:45.933 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:45.933 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:08:45.933 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:45.933 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:45.933 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:45.933 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:45.933 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:45.933 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:08:45.933 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:45.933 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:08:45.933 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:08:45.933 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:08:45.933 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:08:45.933 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:08:45.933 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:08:45.933 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.933 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.933 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.933 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.933 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.933 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:45.934 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:45.934 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:45.934 05:03:22 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:45.934 Found net devices under 0000:86:00.0: cvl_0_0 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:45.934 Found net devices under 0000:86:00.1: cvl_0_1 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.934 
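The discovery loop above matches the Intel/Mellanox PCI IDs (here both ports of an E810, 0x8086:0x159b) and resolves each PCI address to its kernel net device through sysfs, which is how cvl_0_0 and cvl_0_1 are found. A hedged stand-alone equivalent of that lookup for one of the ports listed above:

  # list the net devices bound to the first E810 port found in this run
  pci=0000:86:00.0
  ls /sys/bus/pci/devices/$pci/net/    # -> cvl_0_0 on this host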
05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.934 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:46.193 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:46.193 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:46.193 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:46.193 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:46.193 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:46.193 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:46.193 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:46.193 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:46.193 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:46.193 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:46.193 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:46.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:46.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:08:46.193 00:08:46.193 --- 10.0.0.2 ping statistics --- 00:08:46.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.193 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:08:46.193 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:46.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:46.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:08:46.193 00:08:46.193 --- 10.0.0.1 ping statistics --- 00:08:46.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.193 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:08:46.193 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.193 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:08:46.193 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:46.193 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.193 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:46.193 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:46.193 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.193 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:46.193 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:46.452 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:08:46.452 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:46.452 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:46.452 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:46.452 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3474945 00:08:46.452 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3474945 00:08:46.452 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:08:46.452 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3474945 ']' 00:08:46.452 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.452 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.452 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.452 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.452 05:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:46.452 [2024-12-09 05:03:22.909088] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
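The topology being verified by the two pings above: the target side runs inside the cvl_0_0_ns_spdk network namespace with 10.0.0.2 on cvl_0_0, the initiator keeps cvl_0_1 with 10.0.0.1 in the root namespace, and port 4420 is opened with an SPDK-tagged iptables rule. A condensed replay of the commands from the trace (root privileges assumed):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean interfaces
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator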
00:08:46.452 [2024-12-09 05:03:22.909141] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.452 [2024-12-09 05:03:22.979362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:46.452 [2024-12-09 05:03:23.021931] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.452 [2024-12-09 05:03:23.021974] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.452 [2024-12-09 05:03:23.021982] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.452 [2024-12-09 05:03:23.021989] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:46.452 [2024-12-09 05:03:23.021995] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:46.452 [2024-12-09 05:03:23.023713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:46.452 [2024-12-09 05:03:23.023826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:46.452 [2024-12-09 05:03:23.023935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.452 [2024-12-09 05:03:23.023935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:46.711 [2024-12-09 05:03:23.162452] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:46.711 Malloc0 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.711 05:03:23 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:46.711 [2024-12-09 05:03:23.236587] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:46.711 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:46.711 { 00:08:46.711 "params": { 00:08:46.711 "name": "Nvme$subsystem", 00:08:46.711 "trtype": "$TEST_TRANSPORT", 00:08:46.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:46.711 "adrfam": "ipv4", 00:08:46.711 "trsvcid": "$NVMF_PORT", 00:08:46.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:46.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:46.712 "hdgst": ${hdgst:-false}, 00:08:46.712 "ddgst": ${ddgst:-false} 00:08:46.712 }, 00:08:46.712 "method": "bdev_nvme_attach_controller" 00:08:46.712 } 00:08:46.712 EOF 00:08:46.712 )") 00:08:46.712 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:08:46.712 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:08:46.712 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:08:46.712 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:46.712 "params": { 00:08:46.712 "name": "Nvme1", 00:08:46.712 "trtype": "tcp", 00:08:46.712 "traddr": "10.0.0.2", 00:08:46.712 "adrfam": "ipv4", 00:08:46.712 "trsvcid": "4420", 00:08:46.712 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:46.712 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:46.712 "hdgst": false, 00:08:46.712 "ddgst": false 00:08:46.712 }, 00:08:46.712 "method": "bdev_nvme_attach_controller" 00:08:46.712 }' 00:08:46.712 [2024-12-09 05:03:23.290600] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
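The bdevio run kicked off above needs two pieces from this trace: the target brought up over RPC (TCP transport, a 64 MiB x 512 B malloc bdev exported as a namespace of cnode1, listener on 10.0.0.2:4420), and a host-side JSON config whose bdev_nvme_attach_controller entry is the one printf'd by gen_nvmf_target_json, wrapped into a bdev-subsystem config and handed to --json on /dev/fd/62. A sketch of the target-side bring-up, reusing $SPDK from the earlier sketch and noting that rpc_cmd in the trace resolves to the rpc.py script:

  RPC=$SPDK/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420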
00:08:46.712 [2024-12-09 05:03:23.290642] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3475098 ] 00:08:46.969 [2024-12-09 05:03:23.356083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:46.969 [2024-12-09 05:03:23.400554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.969 [2024-12-09 05:03:23.400651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.969 [2024-12-09 05:03:23.400651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.226 I/O targets: 00:08:47.226 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:08:47.226 00:08:47.226 00:08:47.226 CUnit - A unit testing framework for C - Version 2.1-3 00:08:47.226 http://cunit.sourceforge.net/ 00:08:47.226 00:08:47.226 00:08:47.226 Suite: bdevio tests on: Nvme1n1 00:08:47.226 Test: blockdev write read block ...passed 00:08:47.226 Test: blockdev write zeroes read block ...passed 00:08:47.226 Test: blockdev write zeroes read no split ...passed 00:08:47.483 Test: blockdev write zeroes read split ...passed 00:08:47.483 Test: blockdev write zeroes read split partial ...passed 00:08:47.483 Test: blockdev reset ...[2024-12-09 05:03:23.922764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:08:47.483 [2024-12-09 05:03:23.922836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x882350 (9): Bad file descriptor 00:08:47.483 [2024-12-09 05:03:24.065279] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:08:47.483 passed 00:08:47.483 Test: blockdev write read 8 blocks ...passed 00:08:47.483 Test: blockdev write read size > 128k ...passed 00:08:47.483 Test: blockdev write read invalid size ...passed 00:08:47.483 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:47.483 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:47.483 Test: blockdev write read max offset ...passed 00:08:47.741 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:47.741 Test: blockdev writev readv 8 blocks ...passed 00:08:47.741 Test: blockdev writev readv 30 x 1block ...passed 00:08:47.741 Test: blockdev writev readv block ...passed 00:08:47.741 Test: blockdev writev readv size > 128k ...passed 00:08:47.741 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:47.741 Test: blockdev comparev and writev ...[2024-12-09 05:03:24.279827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:47.741 [2024-12-09 05:03:24.279862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:08:47.741 [2024-12-09 05:03:24.279876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:47.741 [2024-12-09 05:03:24.279883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:08:47.741 [2024-12-09 05:03:24.280137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:47.741 [2024-12-09 05:03:24.280149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:08:47.741 [2024-12-09 05:03:24.280160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:47.741 [2024-12-09 05:03:24.280167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:08:47.741 [2024-12-09 05:03:24.280415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:47.741 [2024-12-09 05:03:24.280427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:08:47.741 [2024-12-09 05:03:24.280438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:47.742 [2024-12-09 05:03:24.280445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:08:47.742 [2024-12-09 05:03:24.280687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:47.742 [2024-12-09 05:03:24.280698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:08:47.742 [2024-12-09 05:03:24.280710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:47.742 [2024-12-09 05:03:24.280721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:08:47.742 passed 00:08:47.742 Test: blockdev nvme passthru rw ...passed 00:08:47.742 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:03:24.362277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:47.742 [2024-12-09 05:03:24.362296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:08:47.742 [2024-12-09 05:03:24.362418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:47.742 [2024-12-09 05:03:24.362429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:08:47.742 [2024-12-09 05:03:24.362545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:47.742 [2024-12-09 05:03:24.362555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:08:47.742 [2024-12-09 05:03:24.362671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:47.742 [2024-12-09 05:03:24.362681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:08:47.742 passed 00:08:47.742 Test: blockdev nvme admin passthru ...passed 00:08:48.000 Test: blockdev copy ...passed 00:08:48.000 00:08:48.000 Run Summary: Type Total Ran Passed Failed Inactive 00:08:48.000 suites 1 1 n/a 0 0 00:08:48.000 tests 23 23 23 0 0 00:08:48.000 asserts 152 152 152 0 n/a 00:08:48.000 00:08:48.000 Elapsed time = 1.382 seconds 00:08:48.000 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:48.000 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.000 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:48.000 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.000 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:08:48.000 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:08:48.000 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:48.000 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:08:48.000 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:48.000 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:08:48.000 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:48.000 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:48.000 rmmod nvme_tcp 00:08:48.000 rmmod nvme_fabrics 00:08:48.000 rmmod nvme_keyring 00:08:48.258 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:48.258 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:08:48.258 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:08:48.258 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3474945 ']' 00:08:48.258 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3474945 00:08:48.258 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3474945 ']' 00:08:48.258 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3474945 00:08:48.258 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:08:48.258 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:48.258 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3474945 00:08:48.258 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:08:48.258 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:08:48.258 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3474945' 00:08:48.258 killing process with pid 3474945 00:08:48.258 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3474945 00:08:48.258 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3474945 00:08:48.517 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:48.517 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:48.517 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:48.517 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:08:48.517 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:48.517 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:08:48.517 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:08:48.517 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:48.517 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:48.517 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.517 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.517 05:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.421 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:50.421 00:08:50.421 real 0m10.130s 00:08:50.421 user 0m11.821s 00:08:50.421 sys 0m4.945s 00:08:50.421 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.421 05:03:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:50.421 ************************************ 00:08:50.421 END TEST nvmf_bdevio 00:08:50.421 ************************************ 00:08:50.421 05:03:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:50.421 00:08:50.421 real 4m30.676s 00:08:50.421 user 10m19.968s 00:08:50.421 sys 1m33.548s 
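The tail of nvmftestfini above stops the target application and rolls back the per-run network state. A condensed sketch of those steps, using the names reported in this log (pid 3474945, namespace cvl_0_0_ns_spdk, initiator-side interface cvl_0_1); the harness helpers killprocess, iptr and remove_spdk_ns are what actually run:

    # stop the SPDK target application (reactor_3 in the ps output above)
    kill 3474945
    # keep every iptables rule except the SPDK_NVMF-tagged ones (what the iptr helper does)
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # drop the target-side network namespace and clear the leftover initiator address
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1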
00:08:50.421 05:03:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.421 05:03:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:50.421 ************************************ 00:08:50.421 END TEST nvmf_target_core 00:08:50.421 ************************************ 00:08:50.682 05:03:27 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:08:50.682 05:03:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:50.682 05:03:27 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.682 05:03:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:50.682 ************************************ 00:08:50.682 START TEST nvmf_target_extra 00:08:50.682 ************************************ 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:08:50.682 * Looking for test storage... 00:08:50.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:50.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.682 --rc genhtml_branch_coverage=1 00:08:50.682 --rc genhtml_function_coverage=1 00:08:50.682 --rc genhtml_legend=1 00:08:50.682 --rc geninfo_all_blocks=1 00:08:50.682 --rc geninfo_unexecuted_blocks=1 00:08:50.682 00:08:50.682 ' 00:08:50.682 05:03:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:50.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.682 --rc genhtml_branch_coverage=1 00:08:50.683 --rc genhtml_function_coverage=1 00:08:50.683 --rc genhtml_legend=1 00:08:50.683 --rc geninfo_all_blocks=1 00:08:50.683 --rc geninfo_unexecuted_blocks=1 00:08:50.683 00:08:50.683 ' 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:50.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.683 --rc genhtml_branch_coverage=1 00:08:50.683 --rc genhtml_function_coverage=1 00:08:50.683 --rc genhtml_legend=1 00:08:50.683 --rc geninfo_all_blocks=1 00:08:50.683 --rc geninfo_unexecuted_blocks=1 00:08:50.683 00:08:50.683 ' 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:50.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.683 --rc genhtml_branch_coverage=1 00:08:50.683 --rc genhtml_function_coverage=1 00:08:50.683 --rc genhtml_legend=1 00:08:50.683 --rc geninfo_all_blocks=1 00:08:50.683 --rc geninfo_unexecuted_blocks=1 00:08:50.683 00:08:50.683 ' 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
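The cmp_versions walk above is an element-wise numeric compare of dotted version strings; here it concludes that lcov 1.15 < 2 and therefore keeps the legacy --rc lcov_*_coverage=1 options in LCOV_OPTS. A standalone sketch of the same idea (a toy that only splits on '.', not the project's actual helper):

    # lt VER1 VER2 -> success when VER1 sorts before VER2, compared numerically field by field
    lt() {
        local IFS=. i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not "less than"
    }
    lt 1.15 2 && echo "old lcov: keep --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"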
00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:50.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.683 05:03:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:08:50.943 ************************************ 00:08:50.943 START TEST nvmf_example 00:08:50.943 ************************************ 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:50.943 * Looking for test storage... 
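Before the example suite's storage probe continues below, note the host identity set up when nvmf/common.sh was sourced above: NVME_HOSTNQN comes from nvme gen-hostnqn, NVME_HOSTID is the UUID portion of that NQN, and both are folded into the NVME_HOST flag array. One way to derive the same values by hand, assuming nvme-cli is installed (the connect line is only illustrative; this suite does not connect at this point):

    # reproduce the host-identity setup shown in the trace above
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep only the uuid after the last ':'
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"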
00:08:50.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:50.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.943 --rc genhtml_branch_coverage=1 00:08:50.943 --rc genhtml_function_coverage=1 00:08:50.943 --rc genhtml_legend=1 00:08:50.943 --rc geninfo_all_blocks=1 00:08:50.943 --rc geninfo_unexecuted_blocks=1 00:08:50.943 00:08:50.943 ' 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:50.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.943 --rc genhtml_branch_coverage=1 00:08:50.943 --rc genhtml_function_coverage=1 00:08:50.943 --rc genhtml_legend=1 00:08:50.943 --rc geninfo_all_blocks=1 00:08:50.943 --rc geninfo_unexecuted_blocks=1 00:08:50.943 00:08:50.943 ' 00:08:50.943 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:50.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.944 --rc genhtml_branch_coverage=1 00:08:50.944 --rc genhtml_function_coverage=1 00:08:50.944 --rc genhtml_legend=1 00:08:50.944 --rc geninfo_all_blocks=1 00:08:50.944 --rc geninfo_unexecuted_blocks=1 00:08:50.944 00:08:50.944 ' 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:50.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.944 --rc genhtml_branch_coverage=1 00:08:50.944 --rc genhtml_function_coverage=1 00:08:50.944 --rc genhtml_legend=1 00:08:50.944 --rc geninfo_all_blocks=1 00:08:50.944 --rc geninfo_unexecuted_blocks=1 00:08:50.944 00:08:50.944 ' 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:50.944 05:03:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:50.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:50.944 05:03:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:50.944 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:50.945 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:08:50.945 05:03:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:08:56.222 05:03:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:56.222 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:56.222 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:56.222 Found net devices under 0000:86:00.0: cvl_0_0 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:56.222 Found net devices under 0000:86:00.1: cvl_0_1 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.222 05:03:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:56.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:56.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:08:56.222 00:08:56.222 --- 10.0.0.2 ping statistics --- 00:08:56.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.222 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:08:56.222 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:56.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:56.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:08:56.222 00:08:56.223 --- 10.0.0.1 ping statistics --- 00:08:56.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.223 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:08:56.223 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:56.223 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:08:56.223 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:56.223 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:56.223 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:56.223 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:56.223 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:56.223 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:56.223 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:56.223 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:56.223 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:56.223 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:56.223 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:56.223 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:56.223 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:56.223 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3478866 00:08:56.223 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:56.223 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3478866 00:08:56.223 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3478866 ']' 00:08:56.223 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.223 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.223 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:56.223 05:03:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.223 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.223 05:03:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:57.159 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.159 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:08:57.159 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:57.159 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:57.159 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:57.159 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:57.159 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.159 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:57.159 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.159 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:57.159 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.159 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:57.417 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.417 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:57.417 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:57.417 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.417 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:57.417 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.417 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:57.417 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:57.417 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.417 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:57.417 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.417 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:57.417 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:57.417 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:57.417 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.417 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:57.417 05:03:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:07.540 Initializing NVMe Controllers 00:09:07.540 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:07.540 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:07.540 Initialization complete. Launching workers. 00:09:07.540 ======================================================== 00:09:07.540 Latency(us) 00:09:07.540 Device Information : IOPS MiB/s Average min max 00:09:07.540 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17803.06 69.54 3594.36 699.68 15447.39 00:09:07.540 ======================================================== 00:09:07.541 Total : 17803.06 69.54 3594.36 699.68 15447.39 00:09:07.541 00:09:07.798 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:07.798 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:07.798 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:07.798 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:07.798 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:07.798 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:07.798 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:07.798 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:07.798 rmmod nvme_tcp 00:09:07.798 rmmod nvme_fabrics 00:09:07.798 rmmod nvme_keyring 00:09:07.798 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:07.798 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:09:07.798 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:07.798 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3478866 ']' 00:09:07.798 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3478866 00:09:07.798 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3478866 ']' 00:09:07.798 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3478866 00:09:07.798 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:09:07.798 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.798 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3478866 00:09:07.798 05:03:44 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:09:07.798 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:09:07.798 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3478866' 00:09:07.798 killing process with pid 3478866 00:09:07.798 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3478866 00:09:07.798 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3478866 00:09:08.057 nvmf threads initialize successfully 00:09:08.057 bdev subsystem init successfully 00:09:08.057 created a nvmf target service 00:09:08.057 create targets's poll groups done 00:09:08.057 all subsystems of target started 00:09:08.057 nvmf target is running 00:09:08.057 all subsystems of target stopped 00:09:08.057 destroy targets's poll groups done 00:09:08.057 destroyed the nvmf target service 00:09:08.057 bdev subsystem finish successfully 00:09:08.057 nvmf threads destroy successfully 00:09:08.057 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:08.057 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:08.057 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:08.057 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:09:08.057 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:09:08.057 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:08.057 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:09:08.057 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:08.057 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:08.057 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.057 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.057 05:03:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.595 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:10.596 00:09:10.596 real 0m19.360s 00:09:10.596 user 0m46.511s 00:09:10.596 sys 0m5.718s 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:10.596 ************************************ 00:09:10.596 END TEST nvmf_example 00:09:10.596 ************************************ 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:10.596 ************************************ 00:09:10.596 START TEST nvmf_filesystem 00:09:10.596 ************************************ 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:10.596 * Looking for test storage... 00:09:10.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:10.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.596 --rc genhtml_branch_coverage=1 00:09:10.596 --rc genhtml_function_coverage=1 00:09:10.596 --rc genhtml_legend=1 00:09:10.596 --rc geninfo_all_blocks=1 00:09:10.596 --rc geninfo_unexecuted_blocks=1 00:09:10.596 00:09:10.596 ' 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:10.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.596 --rc genhtml_branch_coverage=1 00:09:10.596 --rc genhtml_function_coverage=1 00:09:10.596 --rc genhtml_legend=1 00:09:10.596 --rc geninfo_all_blocks=1 00:09:10.596 --rc geninfo_unexecuted_blocks=1 00:09:10.596 00:09:10.596 ' 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:10.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.596 --rc genhtml_branch_coverage=1 00:09:10.596 --rc genhtml_function_coverage=1 00:09:10.596 --rc genhtml_legend=1 00:09:10.596 --rc geninfo_all_blocks=1 00:09:10.596 --rc geninfo_unexecuted_blocks=1 00:09:10.596 00:09:10.596 ' 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:10.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.596 --rc genhtml_branch_coverage=1 00:09:10.596 --rc genhtml_function_coverage=1 00:09:10.596 --rc genhtml_legend=1 00:09:10.596 --rc geninfo_all_blocks=1 00:09:10.596 --rc geninfo_unexecuted_blocks=1 00:09:10.596 00:09:10.596 ' 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:10.596 05:03:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:10.596 
05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:10.596 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:10.596 #define SPDK_CONFIG_H 00:09:10.596 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:10.596 #define SPDK_CONFIG_APPS 1 00:09:10.596 #define SPDK_CONFIG_ARCH native 00:09:10.596 #undef SPDK_CONFIG_ASAN 00:09:10.596 #undef SPDK_CONFIG_AVAHI 00:09:10.596 #undef SPDK_CONFIG_CET 00:09:10.596 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:10.596 #define SPDK_CONFIG_COVERAGE 1 00:09:10.596 #define SPDK_CONFIG_CROSS_PREFIX 00:09:10.596 #undef SPDK_CONFIG_CRYPTO 00:09:10.596 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:10.596 #undef SPDK_CONFIG_CUSTOMOCF 00:09:10.596 #undef SPDK_CONFIG_DAOS 00:09:10.596 #define SPDK_CONFIG_DAOS_DIR 00:09:10.596 #define SPDK_CONFIG_DEBUG 1 00:09:10.596 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:10.596 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:10.596 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:10.596 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:10.596 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:10.596 #undef SPDK_CONFIG_DPDK_UADK 00:09:10.597 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:10.597 #define SPDK_CONFIG_EXAMPLES 1 00:09:10.597 #undef SPDK_CONFIG_FC 00:09:10.597 #define SPDK_CONFIG_FC_PATH 00:09:10.597 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:10.597 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:10.597 #define SPDK_CONFIG_FSDEV 1 00:09:10.597 #undef SPDK_CONFIG_FUSE 00:09:10.597 #undef SPDK_CONFIG_FUZZER 00:09:10.597 #define SPDK_CONFIG_FUZZER_LIB 00:09:10.597 #undef SPDK_CONFIG_GOLANG 00:09:10.597 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:10.597 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:10.597 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:10.597 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:10.597 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:10.597 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:10.597 #undef SPDK_CONFIG_HAVE_LZ4 00:09:10.597 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:10.597 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:10.597 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:10.597 #define SPDK_CONFIG_IDXD 1 00:09:10.597 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:10.597 #undef SPDK_CONFIG_IPSEC_MB 00:09:10.597 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:10.597 #define SPDK_CONFIG_ISAL 1 00:09:10.597 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:10.597 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:10.597 #define SPDK_CONFIG_LIBDIR 00:09:10.597 #undef SPDK_CONFIG_LTO 00:09:10.597 #define SPDK_CONFIG_MAX_LCORES 128 00:09:10.597 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:10.597 #define SPDK_CONFIG_NVME_CUSE 1 00:09:10.597 #undef SPDK_CONFIG_OCF 00:09:10.597 #define SPDK_CONFIG_OCF_PATH 00:09:10.597 #define SPDK_CONFIG_OPENSSL_PATH 00:09:10.597 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:10.597 #define SPDK_CONFIG_PGO_DIR 00:09:10.597 #undef SPDK_CONFIG_PGO_USE 00:09:10.597 #define SPDK_CONFIG_PREFIX /usr/local 00:09:10.597 #undef SPDK_CONFIG_RAID5F 00:09:10.597 #undef SPDK_CONFIG_RBD 00:09:10.597 #define SPDK_CONFIG_RDMA 1 00:09:10.597 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:10.597 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:10.597 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:10.597 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:10.597 #define SPDK_CONFIG_SHARED 1 00:09:10.597 #undef SPDK_CONFIG_SMA 00:09:10.597 #define SPDK_CONFIG_TESTS 1 00:09:10.597 #undef SPDK_CONFIG_TSAN 
00:09:10.597 #define SPDK_CONFIG_UBLK 1 00:09:10.597 #define SPDK_CONFIG_UBSAN 1 00:09:10.597 #undef SPDK_CONFIG_UNIT_TESTS 00:09:10.597 #undef SPDK_CONFIG_URING 00:09:10.597 #define SPDK_CONFIG_URING_PATH 00:09:10.597 #undef SPDK_CONFIG_URING_ZNS 00:09:10.597 #undef SPDK_CONFIG_USDT 00:09:10.597 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:10.597 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:10.597 #define SPDK_CONFIG_VFIO_USER 1 00:09:10.597 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:10.597 #define SPDK_CONFIG_VHOST 1 00:09:10.597 #define SPDK_CONFIG_VIRTIO 1 00:09:10.597 #undef SPDK_CONFIG_VTUNE 00:09:10.597 #define SPDK_CONFIG_VTUNE_DIR 00:09:10.597 #define SPDK_CONFIG_WERROR 1 00:09:10.597 #define SPDK_CONFIG_WPDK_DIR 00:09:10.597 #undef SPDK_CONFIG_XNVME 00:09:10.597 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:10.597 05:03:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:10.597 05:03:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:10.597 05:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:10.597 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3481376 ]] 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3481376 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 
00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.rGcCOh 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.rGcCOh/tests/target /tmp/spdk.rGcCOh 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:09:10.598 05:03:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189226676224 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963953152 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6737276928 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97971945472 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981976576 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169748992 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192793088 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981276160 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981976576 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=700416 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:10.598 05:03:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:10.598 * Looking for test storage... 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189226676224 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8951869440 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:10.598 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:09:10.599 05:03:47 
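Note: set_test_storage, traced above, only checks that the directory the test wants to write into sits on a filesystem with enough free space (2 GiB requested here, padded by 64 MiB) and falls back to a mktemp-derived scratch directory otherwise; here the workspace overlay has ~189 GB free, so the test directory itself is used. A much-simplified sketch of the same decision (the tmpfs/ramfs special cases and error handling are omitted):

  requested_size=$(( 2147483648 + 64 * 1024 * 1024 ))   # 2 GiB of test data plus 64 MiB of slack
  target_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
  storage_fallback=$(mktemp -udt spdk.XXXXXX)           # scratch location if the test dir is too small

  avail_bytes=$(( $(df --output=avail -B1 "$target_dir" | tail -n 1) ))
  if (( avail_bytes >= requested_size )); then
      export SPDK_TEST_STORAGE=$target_dir
  else
      mkdir -p "$storage_fallback/tests/target"
      export SPDK_TEST_STORAGE=$storage_fallback/tests/target
  fi
  printf '* Found test storage at %s\n' "$SPDK_TEST_STORAGE"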
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:10.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.599 --rc genhtml_branch_coverage=1 00:09:10.599 --rc genhtml_function_coverage=1 00:09:10.599 --rc genhtml_legend=1 00:09:10.599 --rc geninfo_all_blocks=1 00:09:10.599 --rc geninfo_unexecuted_blocks=1 00:09:10.599 00:09:10.599 ' 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:10.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.599 --rc genhtml_branch_coverage=1 00:09:10.599 --rc genhtml_function_coverage=1 00:09:10.599 --rc genhtml_legend=1 00:09:10.599 --rc geninfo_all_blocks=1 00:09:10.599 --rc geninfo_unexecuted_blocks=1 00:09:10.599 00:09:10.599 ' 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:10.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.599 --rc genhtml_branch_coverage=1 00:09:10.599 --rc genhtml_function_coverage=1 00:09:10.599 --rc genhtml_legend=1 00:09:10.599 --rc geninfo_all_blocks=1 00:09:10.599 --rc geninfo_unexecuted_blocks=1 00:09:10.599 00:09:10.599 ' 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:10.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.599 --rc genhtml_branch_coverage=1 00:09:10.599 --rc genhtml_function_coverage=1 00:09:10.599 --rc genhtml_legend=1 00:09:10.599 --rc geninfo_all_blocks=1 00:09:10.599 --rc geninfo_unexecuted_blocks=1 00:09:10.599 00:09:10.599 ' 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
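Note: the scripts/common.sh trace above is the stock dotted-version comparison, here deciding that the installed lcov (1.15) is older than 2 so the 1.x-style --rc coverage options are used. The field-by-field comparison can be written compactly; this sketch mirrors the behaviour for plain numeric components only and is not the exact cmp_versions implementation:

  # Return 0 (true) if $1 is an older version than $2, comparing numeric fields.
  version_lt() {
      local IFS=.-:
      local -a a=( $1 )
      local -a b=( $2 )
      local i x y n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          x=${a[i]:-0}; y=${b[i]:-0}
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1    # equal versions are not "less than"
  }

  if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi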
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:10.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:10.599 05:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:09:17.168 
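Note: the gather_supported_nvmf_pci_devs trace that follows whitelists Intel E810/X722 and Mellanox device IDs and then resolves each matching PCI function to its kernel net device by listing /sys/bus/pci/devices/<bdf>/net/. A stripped-down sketch of that sysfs lookup, using the two E810 ports (0x8086:0x159b) found in this run:

  for pci in 0000:86:00.0 0000:86:00.1; do
      for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$netdev" ] || continue              # skip ports with no bound net device
          echo "Found net devices under $pci: ${netdev##*/}"
      done
  done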
05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:17.168 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:17.168 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:17.168 Found net devices under 0000:86:00.0: cvl_0_0 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:17.168 Found net devices under 
0000:86:00.1: cvl_0_1 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:17.168 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:17.169 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:17.169 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:17.169 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:17.169 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:17.169 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:17.169 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:17.169 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:17.169 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:17.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:17.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:09:17.169 00:09:17.169 --- 10.0.0.2 ping statistics --- 00:09:17.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.169 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:09:17.169 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:17.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:17.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:09:17.169 00:09:17.169 --- 10.0.0.1 ping statistics --- 00:09:17.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.169 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:09:17.169 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.169 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:17.169 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:17.169 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.169 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:17.169 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:17.169 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.169 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:17.169 05:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:17.169 ************************************ 00:09:17.169 START TEST nvmf_filesystem_no_in_capsule 00:09:17.169 ************************************ 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
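Note: nvmf_tcp_init, traced above, gives the test a point-to-point TCP path by moving the target-side port (cvl_0_0) into its own network namespace, numbering both ends out of 10.0.0.0/24, opening TCP/4420 in the firewall and proving reachability with one ping in each direction. Collected into one place, with the interface and namespace names from this run (the iptables comment is shortened here):

  TARGET_NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$TARGET_NS"
  ip link set cvl_0_0 netns "$TARGET_NS"                          # target NIC lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side
  ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
  ip netns exec "$TARGET_NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                                              # initiator -> target
  ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1                   # target -> initiator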
00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3484454 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3484454 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3484454 ']' 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:17.169 [2024-12-09 05:03:53.112441] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:09:17.169 [2024-12-09 05:03:53.112493] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.169 [2024-12-09 05:03:53.183290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:17.169 [2024-12-09 05:03:53.226964] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.169 [2024-12-09 05:03:53.227008] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:17.169 [2024-12-09 05:03:53.227015] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.169 [2024-12-09 05:03:53.227022] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.169 [2024-12-09 05:03:53.227027] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
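Note: nvmfappstart, traced above, launches nvmf_tgt inside the target namespace with shared-memory id 0, the full tracepoint mask (-e 0xFFFF) and a four-core mask (-m 0xF), records its pid and blocks until the application is listening on /var/tmp/spdk.sock. The waitforlisten internals are not shown in this excerpt; polling the RPC socket as below is only one way to approximate it:

  SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
  ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Hypothetical wait loop: poll the default RPC socket until the target answers.
  for _ in $(seq 1 100); do
      if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      sleep 0.1
  done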
00:09:17.169 [2024-12-09 05:03:53.228633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.169 [2024-12-09 05:03:53.228733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:17.169 [2024-12-09 05:03:53.228830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:17.169 [2024-12-09 05:03:53.228832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:17.169 [2024-12-09 05:03:53.368240] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:17.169 Malloc1 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.169 05:03:53 
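Note: the rpc_cmd calls traced here (and continued just below) are the entire target-side configuration for the zero in-capsule-data variant of this test: create the TCP transport, back it with a 512 MiB malloc bdev (512-byte blocks), expose that bdev as a namespace of cnode1, and listen on 10.0.0.2:4420. Issued directly with rpc.py (path relative to the SPDK checkout) the same sequence would look like this:

  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0          # TCP transport, 0 bytes in-capsule data
  $RPC bdev_malloc_create 512 512 -b Malloc1                 # 512 MiB RAM-backed bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420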
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:17.169 [2024-12-09 05:03:53.527324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:17.169 { 00:09:17.169 "name": "Malloc1", 00:09:17.169 "aliases": [ 00:09:17.169 "82b8709d-f26f-4c25-bf88-f089a677497a" 00:09:17.169 ], 00:09:17.169 "product_name": "Malloc disk", 00:09:17.169 "block_size": 512, 00:09:17.169 "num_blocks": 1048576, 00:09:17.169 "uuid": "82b8709d-f26f-4c25-bf88-f089a677497a", 00:09:17.169 "assigned_rate_limits": { 00:09:17.169 "rw_ios_per_sec": 0, 00:09:17.169 "rw_mbytes_per_sec": 0, 00:09:17.169 "r_mbytes_per_sec": 0, 00:09:17.169 "w_mbytes_per_sec": 0 00:09:17.169 }, 00:09:17.169 "claimed": true, 00:09:17.169 "claim_type": "exclusive_write", 00:09:17.169 "zoned": false, 00:09:17.169 "supported_io_types": { 00:09:17.169 "read": 
true, 00:09:17.169 "write": true, 00:09:17.169 "unmap": true, 00:09:17.169 "flush": true, 00:09:17.169 "reset": true, 00:09:17.169 "nvme_admin": false, 00:09:17.169 "nvme_io": false, 00:09:17.169 "nvme_io_md": false, 00:09:17.169 "write_zeroes": true, 00:09:17.169 "zcopy": true, 00:09:17.169 "get_zone_info": false, 00:09:17.169 "zone_management": false, 00:09:17.169 "zone_append": false, 00:09:17.169 "compare": false, 00:09:17.169 "compare_and_write": false, 00:09:17.169 "abort": true, 00:09:17.169 "seek_hole": false, 00:09:17.169 "seek_data": false, 00:09:17.169 "copy": true, 00:09:17.169 "nvme_iov_md": false 00:09:17.169 }, 00:09:17.169 "memory_domains": [ 00:09:17.169 { 00:09:17.169 "dma_device_id": "system", 00:09:17.169 "dma_device_type": 1 00:09:17.169 }, 00:09:17.169 { 00:09:17.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.169 "dma_device_type": 2 00:09:17.169 } 00:09:17.169 ], 00:09:17.169 "driver_specific": {} 00:09:17.169 } 00:09:17.169 ]' 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:17.169 05:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:18.545 05:03:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:18.545 05:03:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:18.545 05:03:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:18.545 05:03:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:18.545 05:03:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:20.461 05:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:20.462 05:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:20.462 05:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:09:20.462 05:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:20.462 05:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:20.462 05:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:20.462 05:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:20.462 05:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:20.462 05:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:20.462 05:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:20.462 05:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:20.462 05:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:20.462 05:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:20.462 05:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:20.462 05:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:20.462 05:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:20.462 05:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:20.720 05:03:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:20.978 05:03:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:21.915 05:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:21.915 05:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:21.915 05:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:21.915 05:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.915 05:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:21.915 ************************************ 00:09:21.915 START TEST filesystem_ext4 00:09:21.915 ************************************ 00:09:21.915 05:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
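Note: the initiator side traced above is plain nvme-cli plus a serial-number lookup: connect to the subsystem, wait for a block device whose SERIAL matches the target's, confirm its size equals the 512 MiB malloc bdev, and carve a single GPT partition for the filesystem tests. Condensed below, reusing the NVME_HOSTNQN/NVME_HOSTID values generated earlier by nvme gen-hostnqn; the size computation via /sys/block is an approximation of sec_size_to_bytes:

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

  # Wait for the namespace to appear, then resolve its block device by serial number.
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do sleep 2; done
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')

  nvme_size=$(( $(cat /sys/block/"$nvme_name"/size) * 512 ))   # /sys size is in 512-byte sectors
  [ "$nvme_size" -eq $(( 512 * 1024 * 1024 )) ] || exit 1      # must match the 512 MiB malloc bdev

  mkdir -p /mnt/device
  parted -s /dev/"$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe
  sleep 1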
00:09:21.915 05:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:21.915 05:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:21.915 05:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:21.915 05:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:21.915 05:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:21.915 05:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:21.915 05:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:21.915 05:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:21.915 05:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:21.915 05:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:21.915 mke2fs 1.47.0 (5-Feb-2023) 00:09:22.175 Discarding device blocks: 0/522240 done 00:09:22.175 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:22.175 Filesystem UUID: 448f83c5-bcef-4779-9a34-cb62daf41285 00:09:22.175 Superblock backups stored on blocks: 00:09:22.175 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:22.175 00:09:22.175 Allocating group tables: 0/64 done 00:09:22.175 Writing inode tables: 0/64 done 00:09:22.175 Creating journal (8192 blocks): done 00:09:22.175 Writing superblocks and filesystem accounting information: 0/64 done 00:09:22.175 00:09:22.175 05:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:22.175 05:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:27.447 05:04:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:27.447 05:04:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:27.447 05:04:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:27.447 05:04:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:27.447 05:04:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:27.447 05:04:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:27.447 
05:04:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3484454 00:09:27.447 05:04:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:27.447 05:04:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:27.447 05:04:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:27.447 05:04:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:27.447 00:09:27.447 real 0m5.472s 00:09:27.447 user 0m0.032s 00:09:27.447 sys 0m0.066s 00:09:27.447 05:04:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.447 05:04:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:27.447 ************************************ 00:09:27.447 END TEST filesystem_ext4 00:09:27.447 ************************************ 00:09:27.447 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:27.447 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:27.447 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.447 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:27.447 ************************************ 00:09:27.447 START TEST filesystem_btrfs 00:09:27.447 ************************************ 00:09:27.447 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:27.447 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:27.447 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:27.447 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:27.447 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:09:27.447 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:27.447 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:09:27.447 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:09:27.447 05:04:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:09:27.447 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:09:27.447 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:27.705 btrfs-progs v6.8.1 00:09:27.705 See https://btrfs.readthedocs.io for more information. 00:09:27.705 00:09:27.705 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:27.705 NOTE: several default settings have changed in version 5.15, please make sure 00:09:27.705 this does not affect your deployments: 00:09:27.705 - DUP for metadata (-m dup) 00:09:27.705 - enabled no-holes (-O no-holes) 00:09:27.705 - enabled free-space-tree (-R free-space-tree) 00:09:27.705 00:09:27.705 Label: (null) 00:09:27.705 UUID: 584fcbd9-b94f-4d8e-ad0d-bf8b460585d7 00:09:27.705 Node size: 16384 00:09:27.705 Sector size: 4096 (CPU page size: 4096) 00:09:27.705 Filesystem size: 510.00MiB 00:09:27.705 Block group profiles: 00:09:27.705 Data: single 8.00MiB 00:09:27.705 Metadata: DUP 32.00MiB 00:09:27.705 System: DUP 8.00MiB 00:09:27.705 SSD detected: yes 00:09:27.705 Zoned device: no 00:09:27.705 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:27.705 Checksum: crc32c 00:09:27.705 Number of devices: 1 00:09:27.705 Devices: 00:09:27.705 ID SIZE PATH 00:09:27.705 1 510.00MiB /dev/nvme0n1p1 00:09:27.705 00:09:27.705 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:09:27.705 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:27.963 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:27.963 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:27.963 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:27.963 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:27.963 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:27.963 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:27.963 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3484454 00:09:27.963 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:27.963 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:27.963 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:27.963 
05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:27.963 00:09:27.963 real 0m0.456s 00:09:27.963 user 0m0.023s 00:09:27.963 sys 0m0.117s 00:09:27.963 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.963 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:27.963 ************************************ 00:09:27.963 END TEST filesystem_btrfs 00:09:27.963 ************************************ 00:09:27.963 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:27.963 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:27.963 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.963 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:27.963 ************************************ 00:09:27.963 START TEST filesystem_xfs 00:09:27.963 ************************************ 00:09:27.963 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:09:27.963 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:27.963 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:27.963 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:27.963 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:09:27.963 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:27.963 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:09:27.963 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:09:27.963 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:09:27.963 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:09:27.963 05:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:28.221 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:28.221 = sectsz=512 attr=2, projid32bit=1 00:09:28.221 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:28.221 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:28.221 data 
= bsize=4096 blocks=130560, imaxpct=25 00:09:28.221 = sunit=0 swidth=0 blks 00:09:28.221 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:28.221 log =internal log bsize=4096 blocks=16384, version=2 00:09:28.221 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:28.221 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:29.152 Discarding blocks...Done. 00:09:29.152 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:09:29.152 05:04:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:31.048 05:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:31.048 05:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:31.048 05:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:31.048 05:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:31.048 05:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:31.048 05:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:31.048 05:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3484454 00:09:31.049 05:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:31.049 05:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:31.049 05:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:31.049 05:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:31.049 00:09:31.049 real 0m3.103s 00:09:31.049 user 0m0.022s 00:09:31.049 sys 0m0.077s 00:09:31.049 05:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.049 05:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:31.049 ************************************ 00:09:31.049 END TEST filesystem_xfs 00:09:31.049 ************************************ 00:09:31.306 05:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:31.565 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:31.565 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:31.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.565 05:04:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:31.565 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:09:31.565 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:31.565 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:31.565 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:31.565 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:31.565 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:09:31.565 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:31.565 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.565 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:31.565 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.565 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:31.565 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3484454 00:09:31.565 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3484454 ']' 00:09:31.565 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3484454 00:09:31.565 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:09:31.565 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.565 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3484454 00:09:31.824 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:31.824 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:31.824 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3484454' 00:09:31.824 killing process with pid 3484454 00:09:31.824 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3484454 00:09:31.824 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 3484454 00:09:32.084 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:32.084 00:09:32.084 real 0m15.554s 00:09:32.084 user 1m1.019s 00:09:32.084 sys 0m1.397s 00:09:32.084 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.084 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:32.084 ************************************ 00:09:32.084 END TEST nvmf_filesystem_no_in_capsule 00:09:32.084 ************************************ 00:09:32.084 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:32.084 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:32.084 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.084 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:32.084 ************************************ 00:09:32.084 START TEST nvmf_filesystem_in_capsule 00:09:32.084 ************************************ 00:09:32.084 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:09:32.084 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:32.084 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:32.084 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:32.084 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:32.084 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:32.084 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3487214 00:09:32.084 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3487214 00:09:32.084 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:32.084 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3487214 ']' 00:09:32.084 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.084 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.084 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
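The in-capsule variant starting here differs from the previous run mainly in the transport settings: the TCP transport is created with -c 4096 so the target accepts up to 4096 bytes of data inside the command capsule. The target-side bring-up, condensed from the rpc_cmd calls traced below (rpc_cmd is the harness's wrapper around SPDK's JSON-RPC client), is roughly:

# Create the TCP transport with 4096-byte in-capsule data, back it with a 512 MiB malloc
# bdev (512-byte blocks), and expose that bdev as a namespace of cnode1 on 10.0.0.2:4420.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
rpc_cmd bdev_malloc_create 512 512 -b Malloc1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420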
00:09:32.084 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.084 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:32.343 [2024-12-09 05:04:08.739224] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:09:32.343 [2024-12-09 05:04:08.739268] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.344 [2024-12-09 05:04:08.809088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:32.344 [2024-12-09 05:04:08.848417] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.344 [2024-12-09 05:04:08.848458] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.344 [2024-12-09 05:04:08.848465] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.344 [2024-12-09 05:04:08.848471] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.344 [2024-12-09 05:04:08.848476] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:32.344 [2024-12-09 05:04:08.850081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.344 [2024-12-09 05:04:08.850177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.344 [2024-12-09 05:04:08.850284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.344 [2024-12-09 05:04:08.850286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.344 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.344 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:32.344 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:32.344 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:32.344 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:32.603 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.603 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:32.603 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:32.603 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.603 05:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:32.603 [2024-12-09 05:04:09.001718] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.603 05:04:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.603 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:32.603 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.603 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:32.603 Malloc1 00:09:32.603 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.603 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:32.603 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.603 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:32.603 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.603 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:32.603 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.603 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:32.603 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.603 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:32.603 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.603 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:32.603 [2024-12-09 05:04:09.165218] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:32.603 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.603 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:32.603 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:32.603 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:32.603 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:32.603 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:32.603 05:04:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:32.603 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.603 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:32.603 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.603 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:32.603 { 00:09:32.603 "name": "Malloc1", 00:09:32.603 "aliases": [ 00:09:32.603 "8786ab0d-5024-420d-80f0-1c2765176aaf" 00:09:32.603 ], 00:09:32.603 "product_name": "Malloc disk", 00:09:32.603 "block_size": 512, 00:09:32.603 "num_blocks": 1048576, 00:09:32.603 "uuid": "8786ab0d-5024-420d-80f0-1c2765176aaf", 00:09:32.603 "assigned_rate_limits": { 00:09:32.603 "rw_ios_per_sec": 0, 00:09:32.603 "rw_mbytes_per_sec": 0, 00:09:32.603 "r_mbytes_per_sec": 0, 00:09:32.603 "w_mbytes_per_sec": 0 00:09:32.603 }, 00:09:32.603 "claimed": true, 00:09:32.603 "claim_type": "exclusive_write", 00:09:32.603 "zoned": false, 00:09:32.603 "supported_io_types": { 00:09:32.603 "read": true, 00:09:32.603 "write": true, 00:09:32.603 "unmap": true, 00:09:32.603 "flush": true, 00:09:32.603 "reset": true, 00:09:32.603 "nvme_admin": false, 00:09:32.603 "nvme_io": false, 00:09:32.603 "nvme_io_md": false, 00:09:32.603 "write_zeroes": true, 00:09:32.603 "zcopy": true, 00:09:32.603 "get_zone_info": false, 00:09:32.603 "zone_management": false, 00:09:32.603 "zone_append": false, 00:09:32.603 "compare": false, 00:09:32.603 "compare_and_write": false, 00:09:32.603 "abort": true, 00:09:32.603 "seek_hole": false, 00:09:32.603 "seek_data": false, 00:09:32.603 "copy": true, 00:09:32.603 "nvme_iov_md": false 00:09:32.603 }, 00:09:32.603 "memory_domains": [ 00:09:32.604 { 00:09:32.604 "dma_device_id": "system", 00:09:32.604 "dma_device_type": 1 00:09:32.604 }, 00:09:32.604 { 00:09:32.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.604 "dma_device_type": 2 00:09:32.604 } 00:09:32.604 ], 00:09:32.604 "driver_specific": {} 00:09:32.604 } 00:09:32.604 ]' 00:09:32.604 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:32.604 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:32.604 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:32.862 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:32.862 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:32.862 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:32.862 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:32.862 05:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:34.398 05:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:34.398 05:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:34.398 05:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:34.398 05:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:34.398 05:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:36.309 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:36.309 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:36.309 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:36.309 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:36.309 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:36.309 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:36.309 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:36.309 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:36.309 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:36.309 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:36.309 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:36.309 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:36.309 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:36.309 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:36.310 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:36.310 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:36.310 05:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:36.310 05:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:36.568 05:04:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:37.940 05:04:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:37.940 05:04:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:37.940 05:04:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:37.940 05:04:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.940 05:04:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:37.940 ************************************ 00:09:37.940 START TEST filesystem_in_capsule_ext4 00:09:37.940 ************************************ 00:09:37.940 05:04:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:37.940 05:04:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:37.941 05:04:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:37.941 05:04:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:37.941 05:04:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:37.941 05:04:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:37.941 05:04:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:37.941 05:04:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:37.941 05:04:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:37.941 05:04:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:37.941 05:04:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:37.941 mke2fs 1.47.0 (5-Feb-2023) 00:09:37.941 Discarding device blocks: 0/522240 done 00:09:37.941 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:37.941 Filesystem UUID: 8a4fe337-f4e8-46b0-8de8-deb53e41529e 00:09:37.941 Superblock backups stored on blocks: 00:09:37.941 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:37.941 00:09:37.941 Allocating group tables: 0/64 done 00:09:37.941 Writing inode tables: 
0/64 done 00:09:37.941 Creating journal (8192 blocks): done 00:09:40.134 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:09:40.134 00:09:40.134 05:04:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:40.134 05:04:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:46.694 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:46.694 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:46.694 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:46.694 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:46.694 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:46.694 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:46.694 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3487214 00:09:46.694 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:46.694 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:46.694 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:46.694 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:46.694 00:09:46.694 real 0m8.356s 00:09:46.694 user 0m0.028s 00:09:46.694 sys 0m0.071s 00:09:46.694 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.694 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:46.694 ************************************ 00:09:46.694 END TEST filesystem_in_capsule_ext4 00:09:46.694 ************************************ 00:09:46.694 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:46.694 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:46.695 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.695 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.695 
************************************ 00:09:46.695 START TEST filesystem_in_capsule_btrfs 00:09:46.695 ************************************ 00:09:46.695 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:46.695 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:46.695 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:46.695 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:46.695 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:09:46.695 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:46.695 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:09:46.695 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:09:46.695 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:09:46.695 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:09:46.695 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:46.695 btrfs-progs v6.8.1 00:09:46.695 See https://btrfs.readthedocs.io for more information. 00:09:46.695 00:09:46.695 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:46.695 NOTE: several default settings have changed in version 5.15, please make sure 00:09:46.695 this does not affect your deployments: 00:09:46.695 - DUP for metadata (-m dup) 00:09:46.695 - enabled no-holes (-O no-holes) 00:09:46.695 - enabled free-space-tree (-R free-space-tree) 00:09:46.695 00:09:46.695 Label: (null) 00:09:46.695 UUID: bb611c0c-f42d-4ea7-bc1a-69ba496d6e43 00:09:46.695 Node size: 16384 00:09:46.695 Sector size: 4096 (CPU page size: 4096) 00:09:46.695 Filesystem size: 510.00MiB 00:09:46.695 Block group profiles: 00:09:46.695 Data: single 8.00MiB 00:09:46.695 Metadata: DUP 32.00MiB 00:09:46.695 System: DUP 8.00MiB 00:09:46.695 SSD detected: yes 00:09:46.695 Zoned device: no 00:09:46.695 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:46.695 Checksum: crc32c 00:09:46.695 Number of devices: 1 00:09:46.695 Devices: 00:09:46.695 ID SIZE PATH 00:09:46.695 1 510.00MiB /dev/nvme0n1p1 00:09:46.695 00:09:46.695 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:09:46.695 05:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3487214 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:46.695 00:09:46.695 real 0m0.502s 00:09:46.695 user 0m0.027s 00:09:46.695 sys 0m0.113s 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:09:46.695 ************************************ 00:09:46.695 END TEST filesystem_in_capsule_btrfs 00:09:46.695 ************************************ 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.695 ************************************ 00:09:46.695 START TEST filesystem_in_capsule_xfs 00:09:46.695 ************************************ 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:09:46.695 05:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:46.695 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:46.695 = sectsz=512 attr=2, projid32bit=1 00:09:46.695 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:46.695 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:46.695 data = bsize=4096 blocks=130560, imaxpct=25 00:09:46.695 = sunit=0 swidth=0 blks 00:09:46.695 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:46.695 log =internal log bsize=4096 blocks=16384, version=2 00:09:46.695 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:46.695 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:48.068 Discarding blocks...Done. 
00:09:48.068 05:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:09:48.068 05:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:49.440 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:49.440 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:49.440 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:49.440 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:49.441 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:49.441 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:49.698 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3487214 00:09:49.698 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:49.698 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:49.698 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:49.698 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:49.698 00:09:49.698 real 0m2.888s 00:09:49.698 user 0m0.023s 00:09:49.698 sys 0m0.076s 00:09:49.698 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.698 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:49.698 ************************************ 00:09:49.698 END TEST filesystem_in_capsule_xfs 00:09:49.698 ************************************ 00:09:49.698 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:49.698 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:49.698 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:49.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.957 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:49.957 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:09:49.957 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:49.957 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:49.957 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:49.957 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:49.957 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:09:49.957 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:49.957 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.957 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:49.957 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.957 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:49.957 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3487214 00:09:49.957 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3487214 ']' 00:09:49.957 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3487214 00:09:49.957 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:09:49.957 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:49.957 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3487214 00:09:49.957 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:49.957 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:49.957 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3487214' 00:09:49.957 killing process with pid 3487214 00:09:49.957 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3487214 00:09:49.957 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3487214 00:09:50.226 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:50.226 00:09:50.226 real 0m18.111s 00:09:50.226 user 1m11.210s 00:09:50.226 sys 0m1.440s 00:09:50.226 05:04:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.226 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:50.226 ************************************ 00:09:50.226 END TEST nvmf_filesystem_in_capsule 00:09:50.226 ************************************ 00:09:50.226 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:50.226 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:50.226 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:09:50.226 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:50.226 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:09:50.226 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:50.226 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:50.226 rmmod nvme_tcp 00:09:50.226 rmmod nvme_fabrics 00:09:50.506 rmmod nvme_keyring 00:09:50.506 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:50.506 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:09:50.506 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:09:50.506 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:50.506 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:50.506 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:50.506 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:50.506 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:09:50.506 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:09:50.506 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:50.506 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:09:50.506 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:50.506 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:50.506 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.506 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.506 05:04:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.408 05:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:52.408 00:09:52.408 real 0m42.217s 00:09:52.408 user 2m14.180s 00:09:52.408 sys 0m7.447s 00:09:52.408 05:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.408 05:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:52.408 
************************************ 00:09:52.408 END TEST nvmf_filesystem 00:09:52.408 ************************************ 00:09:52.408 05:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:52.408 05:04:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:52.408 05:04:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.408 05:04:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:52.408 ************************************ 00:09:52.408 START TEST nvmf_target_discovery 00:09:52.408 ************************************ 00:09:52.408 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:52.668 * Looking for test storage... 00:09:52.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:52.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.668 --rc genhtml_branch_coverage=1 00:09:52.668 --rc genhtml_function_coverage=1 00:09:52.668 --rc genhtml_legend=1 00:09:52.668 --rc geninfo_all_blocks=1 00:09:52.668 --rc geninfo_unexecuted_blocks=1 00:09:52.668 00:09:52.668 ' 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:52.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.668 --rc genhtml_branch_coverage=1 00:09:52.668 --rc genhtml_function_coverage=1 00:09:52.668 --rc genhtml_legend=1 00:09:52.668 --rc geninfo_all_blocks=1 00:09:52.668 --rc geninfo_unexecuted_blocks=1 00:09:52.668 00:09:52.668 ' 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:52.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.668 --rc genhtml_branch_coverage=1 00:09:52.668 --rc genhtml_function_coverage=1 00:09:52.668 --rc genhtml_legend=1 00:09:52.668 --rc geninfo_all_blocks=1 00:09:52.668 --rc geninfo_unexecuted_blocks=1 00:09:52.668 00:09:52.668 ' 00:09:52.668 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:52.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.668 --rc genhtml_branch_coverage=1 00:09:52.668 --rc genhtml_function_coverage=1 00:09:52.668 --rc genhtml_legend=1 00:09:52.668 --rc geninfo_all_blocks=1 00:09:52.669 --rc geninfo_unexecuted_blocks=1 00:09:52.669 00:09:52.669 ' 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:52.669 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:09:52.669 05:04:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:57.943 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:57.943 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:09:57.943 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:57.943 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:57.943 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:57.943 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:57.943 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:57.943 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:09:57.943 05:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:57.943 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:09:57.943 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:09:57.943 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:09:57.943 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:09:57.943 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:09:57.943 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:09:57.943 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:57.943 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:57.943 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:57.943 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:57.943 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:57.943 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:57.944 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:57.944 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:57.944 Found net devices under 0000:86:00.0: cvl_0_0 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
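At this point prepare_net_devs has matched the two E810 ports (0000:86:00.0 and 0000:86:00.1, device id 0x159b) and is resolving each PCI address to its kernel net device through sysfs, which is how cvl_0_0 and cvl_0_1 are found. The lookup amounts to the following (a sketch of the pci_net_devs expansion shown in the log; the PCI addresses are the ones from this machine):

#!/usr/bin/env bash
# Resolve a NIC's PCI address to its net device name via sysfs, as
# gather_supported_nvmf_pci_devs does above.
for pci in 0000:86:00.0 0000:86:00.1; do
    for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e "$netdev" ]] || continue
        echo "Found net device under $pci: ${netdev##*/}"
    done
done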
00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:57.944 Found net devices under 0000:86:00.1: cvl_0_1 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:57.944 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:58.202 05:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:58.202 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:58.202 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:58.202 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:58.202 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:58.202 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:58.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:58.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:09:58.202 00:09:58.202 --- 10.0.0.2 ping statistics --- 00:09:58.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.203 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:09:58.203 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:58.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:58.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:09:58.203 00:09:58.203 --- 10.0.0.1 ping statistics --- 00:09:58.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.203 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:09:58.203 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:58.203 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:09:58.203 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:58.203 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:58.203 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:58.203 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:58.203 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:58.203 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:58.203 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:58.203 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:58.203 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:58.203 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:58.203 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.203 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3493944 00:09:58.203 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:58.203 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3493944 00:09:58.203 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3493944 ']' 00:09:58.203 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.203 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.203 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.203 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.203 05:04:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.203 [2024-12-09 05:04:34.786282] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:09:58.203 [2024-12-09 05:04:34.786331] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.461 [2024-12-09 05:04:34.854778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:58.461 [2024-12-09 05:04:34.895584] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:58.461 [2024-12-09 05:04:34.895624] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:58.461 [2024-12-09 05:04:34.895632] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:58.461 [2024-12-09 05:04:34.895638] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:58.461 [2024-12-09 05:04:34.895644] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
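The target is now up: one E810 port (cvl_0_0) was moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the other (cvl_0_1) kept 10.0.0.1/24 on the host side, and nvmf_tgt was started inside the namespace. The discovery test that follows creates the TCP transport, four Null bdev subsystems, a discovery listener and a referral, then checks them with nvme discover. Stripped of the xtrace noise, the sequence is roughly as below (a sketch assembled from the commands in this log; rpc_cmd in the test is a wrapper around scripts/rpc.py, paths are shortened to the spdk repo root, and the real script waits on the RPC socket with waitforlisten rather than sleeping):

#!/usr/bin/env bash
# Namespace wiring and target bring-up, as seen in nvmf_tcp_init/nvmfappstart above.
set -e
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
sleep 3                                                   # placeholder for waitforlisten

# What discovery.sh goes on to do below (shown for one of the four subsystems):
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_null_create Null1 102400 512
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

# Initiator-side check: the discovery log should list the four subsystems plus the referral.
nvme discover -t tcp -a 10.0.0.2 -s 4420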
00:09:58.461 [2024-12-09 05:04:34.897249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.461 [2024-12-09 05:04:34.897346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:58.461 [2024-12-09 05:04:34.897412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:58.461 [2024-12-09 05:04:34.897414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.461 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.461 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:09:58.461 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:58.461 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:58.461 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.461 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.461 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:58.461 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.461 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.461 [2024-12-09 05:04:35.044295] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:58.461 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.461 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:58.461 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:58.461 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:58.461 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.461 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.461 Null1 00:09:58.461 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.461 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:58.461 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.461 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.461 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.461 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:58.462 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.462 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.462 05:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.462 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:58.462 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.462 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.462 [2024-12-09 05:04:35.100159] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:58.462 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.462 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:58.462 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.719 Null2 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:09:58.719 Null3 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.719 Null4 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.719 05:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.719 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:09:58.977 00:09:58.977 Discovery Log Number of Records 6, Generation counter 6 00:09:58.977 =====Discovery Log Entry 0====== 00:09:58.977 trtype: tcp 00:09:58.977 adrfam: ipv4 00:09:58.977 subtype: current discovery subsystem 00:09:58.977 treq: not required 00:09:58.977 portid: 0 00:09:58.977 trsvcid: 4420 00:09:58.977 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:58.977 traddr: 10.0.0.2 00:09:58.977 eflags: explicit discovery connections, duplicate discovery information 00:09:58.977 sectype: none 00:09:58.977 =====Discovery Log Entry 1====== 00:09:58.977 trtype: tcp 00:09:58.977 adrfam: ipv4 00:09:58.977 subtype: nvme subsystem 00:09:58.977 treq: not required 00:09:58.977 portid: 0 00:09:58.977 trsvcid: 4420 00:09:58.977 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:58.977 traddr: 10.0.0.2 00:09:58.977 eflags: none 00:09:58.977 sectype: none 00:09:58.977 =====Discovery Log Entry 2====== 00:09:58.977 trtype: tcp 00:09:58.977 adrfam: ipv4 00:09:58.977 subtype: nvme subsystem 00:09:58.977 treq: not required 00:09:58.977 portid: 0 00:09:58.977 trsvcid: 4420 00:09:58.977 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:58.977 traddr: 10.0.0.2 00:09:58.977 eflags: none 00:09:58.977 sectype: none 00:09:58.977 =====Discovery Log Entry 3====== 00:09:58.977 trtype: tcp 00:09:58.977 adrfam: ipv4 00:09:58.977 subtype: nvme subsystem 00:09:58.977 treq: not required 00:09:58.977 portid: 0 00:09:58.977 trsvcid: 4420 00:09:58.977 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:58.977 traddr: 10.0.0.2 00:09:58.977 eflags: none 00:09:58.977 sectype: none 00:09:58.977 =====Discovery Log Entry 4====== 00:09:58.977 trtype: tcp 00:09:58.977 adrfam: ipv4 00:09:58.977 subtype: nvme subsystem 
00:09:58.977 treq: not required 00:09:58.977 portid: 0 00:09:58.977 trsvcid: 4420 00:09:58.977 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:58.977 traddr: 10.0.0.2 00:09:58.977 eflags: none 00:09:58.977 sectype: none 00:09:58.977 =====Discovery Log Entry 5====== 00:09:58.977 trtype: tcp 00:09:58.977 adrfam: ipv4 00:09:58.977 subtype: discovery subsystem referral 00:09:58.977 treq: not required 00:09:58.977 portid: 0 00:09:58.977 trsvcid: 4430 00:09:58.977 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:58.977 traddr: 10.0.0.2 00:09:58.977 eflags: none 00:09:58.977 sectype: none 00:09:58.977 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:58.977 Perform nvmf subsystem discovery via RPC 00:09:58.977 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:58.977 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.977 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.977 [ 00:09:58.977 { 00:09:58.977 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:58.977 "subtype": "Discovery", 00:09:58.977 "listen_addresses": [ 00:09:58.977 { 00:09:58.977 "trtype": "TCP", 00:09:58.977 "adrfam": "IPv4", 00:09:58.977 "traddr": "10.0.0.2", 00:09:58.977 "trsvcid": "4420" 00:09:58.977 } 00:09:58.977 ], 00:09:58.977 "allow_any_host": true, 00:09:58.977 "hosts": [] 00:09:58.977 }, 00:09:58.977 { 00:09:58.977 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:58.977 "subtype": "NVMe", 00:09:58.977 "listen_addresses": [ 00:09:58.977 { 00:09:58.977 "trtype": "TCP", 00:09:58.977 "adrfam": "IPv4", 00:09:58.977 "traddr": "10.0.0.2", 00:09:58.977 "trsvcid": "4420" 00:09:58.977 } 00:09:58.977 ], 00:09:58.977 "allow_any_host": true, 00:09:58.977 "hosts": [], 00:09:58.977 "serial_number": "SPDK00000000000001", 00:09:58.977 "model_number": "SPDK bdev Controller", 00:09:58.977 "max_namespaces": 32, 00:09:58.977 "min_cntlid": 1, 00:09:58.977 "max_cntlid": 65519, 00:09:58.977 "namespaces": [ 00:09:58.977 { 00:09:58.977 "nsid": 1, 00:09:58.977 "bdev_name": "Null1", 00:09:58.977 "name": "Null1", 00:09:58.978 "nguid": "FB661BBA636A4961BBEBE7E1577F47FE", 00:09:58.978 "uuid": "fb661bba-636a-4961-bbeb-e7e1577f47fe" 00:09:58.978 } 00:09:58.978 ] 00:09:58.978 }, 00:09:58.978 { 00:09:58.978 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:58.978 "subtype": "NVMe", 00:09:58.978 "listen_addresses": [ 00:09:58.978 { 00:09:58.978 "trtype": "TCP", 00:09:58.978 "adrfam": "IPv4", 00:09:58.978 "traddr": "10.0.0.2", 00:09:58.978 "trsvcid": "4420" 00:09:58.978 } 00:09:58.978 ], 00:09:58.978 "allow_any_host": true, 00:09:58.978 "hosts": [], 00:09:58.978 "serial_number": "SPDK00000000000002", 00:09:58.978 "model_number": "SPDK bdev Controller", 00:09:58.978 "max_namespaces": 32, 00:09:58.978 "min_cntlid": 1, 00:09:58.978 "max_cntlid": 65519, 00:09:58.978 "namespaces": [ 00:09:58.978 { 00:09:58.978 "nsid": 1, 00:09:58.978 "bdev_name": "Null2", 00:09:58.978 "name": "Null2", 00:09:58.978 "nguid": "858BCBBCE86248238CE982521FE271D2", 00:09:58.978 "uuid": "858bcbbc-e862-4823-8ce9-82521fe271d2" 00:09:58.978 } 00:09:58.978 ] 00:09:58.978 }, 00:09:58.978 { 00:09:58.978 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:58.978 "subtype": "NVMe", 00:09:58.978 "listen_addresses": [ 00:09:58.978 { 00:09:58.978 "trtype": "TCP", 00:09:58.978 "adrfam": "IPv4", 00:09:58.978 "traddr": "10.0.0.2", 
00:09:58.978 "trsvcid": "4420" 00:09:58.978 } 00:09:58.978 ], 00:09:58.978 "allow_any_host": true, 00:09:58.978 "hosts": [], 00:09:58.978 "serial_number": "SPDK00000000000003", 00:09:58.978 "model_number": "SPDK bdev Controller", 00:09:58.978 "max_namespaces": 32, 00:09:58.978 "min_cntlid": 1, 00:09:58.978 "max_cntlid": 65519, 00:09:58.978 "namespaces": [ 00:09:58.978 { 00:09:58.978 "nsid": 1, 00:09:58.978 "bdev_name": "Null3", 00:09:58.978 "name": "Null3", 00:09:58.978 "nguid": "0AFF081F4232420A86A40F61A5503719", 00:09:58.978 "uuid": "0aff081f-4232-420a-86a4-0f61a5503719" 00:09:58.978 } 00:09:58.978 ] 00:09:58.978 }, 00:09:58.978 { 00:09:58.978 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:58.978 "subtype": "NVMe", 00:09:58.978 "listen_addresses": [ 00:09:58.978 { 00:09:58.978 "trtype": "TCP", 00:09:58.978 "adrfam": "IPv4", 00:09:58.978 "traddr": "10.0.0.2", 00:09:58.978 "trsvcid": "4420" 00:09:58.978 } 00:09:58.978 ], 00:09:58.978 "allow_any_host": true, 00:09:58.978 "hosts": [], 00:09:58.978 "serial_number": "SPDK00000000000004", 00:09:58.978 "model_number": "SPDK bdev Controller", 00:09:58.978 "max_namespaces": 32, 00:09:58.978 "min_cntlid": 1, 00:09:58.978 "max_cntlid": 65519, 00:09:58.978 "namespaces": [ 00:09:58.978 { 00:09:58.978 "nsid": 1, 00:09:58.978 "bdev_name": "Null4", 00:09:58.978 "name": "Null4", 00:09:58.978 "nguid": "A998E3AEC9C14160BB34C3A0602D2B5A", 00:09:58.978 "uuid": "a998e3ae-c9c1-4160-bb34-c3a0602d2b5a" 00:09:58.978 } 00:09:58.978 ] 00:09:58.978 } 00:09:58.978 ] 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.978 05:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:58.978 05:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:58.978 rmmod nvme_tcp 00:09:58.978 rmmod nvme_fabrics 00:09:58.978 rmmod nvme_keyring 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3493944 ']' 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3493944 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3493944 ']' 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3493944 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.978 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3493944 00:09:59.237 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:59.237 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:59.237 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3493944' 00:09:59.237 killing process with pid 3493944 00:09:59.237 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3493944 00:09:59.237 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3493944 00:09:59.237 05:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:59.237 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:59.237 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:59.237 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:09:59.237 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:09:59.237 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:09:59.237 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:59.237 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:59.237 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:59.237 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.237 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.237 05:04:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.772 05:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:01.772 00:10:01.772 real 0m8.882s 00:10:01.772 user 0m5.464s 00:10:01.772 sys 0m4.463s 00:10:01.772 05:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.772 05:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.772 ************************************ 00:10:01.772 END TEST nvmf_target_discovery 00:10:01.772 ************************************ 00:10:01.772 05:04:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:01.772 05:04:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:01.772 05:04:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.772 05:04:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:01.772 ************************************ 00:10:01.772 START TEST nvmf_referrals 00:10:01.772 ************************************ 00:10:01.772 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:01.772 * Looking for test storage... 
00:10:01.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:01.772 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:01.772 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:10:01.772 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:01.772 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:01.772 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:01.772 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:01.772 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:01.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.773 --rc genhtml_branch_coverage=1 00:10:01.773 --rc genhtml_function_coverage=1 00:10:01.773 --rc genhtml_legend=1 00:10:01.773 --rc geninfo_all_blocks=1 00:10:01.773 --rc geninfo_unexecuted_blocks=1 00:10:01.773 00:10:01.773 ' 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:01.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.773 --rc genhtml_branch_coverage=1 00:10:01.773 --rc genhtml_function_coverage=1 00:10:01.773 --rc genhtml_legend=1 00:10:01.773 --rc geninfo_all_blocks=1 00:10:01.773 --rc geninfo_unexecuted_blocks=1 00:10:01.773 00:10:01.773 ' 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:01.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.773 --rc genhtml_branch_coverage=1 00:10:01.773 --rc genhtml_function_coverage=1 00:10:01.773 --rc genhtml_legend=1 00:10:01.773 --rc geninfo_all_blocks=1 00:10:01.773 --rc geninfo_unexecuted_blocks=1 00:10:01.773 00:10:01.773 ' 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:01.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.773 --rc genhtml_branch_coverage=1 00:10:01.773 --rc genhtml_function_coverage=1 00:10:01.773 --rc genhtml_legend=1 00:10:01.773 --rc geninfo_all_blocks=1 00:10:01.773 --rc geninfo_unexecuted_blocks=1 00:10:01.773 00:10:01.773 ' 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:01.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:01.773 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:01.774 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:01.774 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:01.774 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.774 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.774 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.774 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:01.774 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:01.774 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:01.774 05:04:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:10:07.048 05:04:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:07.048 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:07.048 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:07.048 
05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:07.048 Found net devices under 0000:86:00.0: cvl_0_0 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.048 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.049 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.049 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.049 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:07.049 Found net devices under 0000:86:00.1: cvl_0_1 00:10:07.049 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.049 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:07.049 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:10:07.049 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:07.049 05:04:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:07.049 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:07.049 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:07.049 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:07.049 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.049 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:07.049 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:07.049 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:07.049 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:07.049 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:07.049 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:07.049 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:07.049 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.049 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:07.049 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:07.049 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:07.049 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:07.308 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:07.308 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:07.308 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:07.308 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:07.308 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:07.308 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:07.308 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:07.308 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:07.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:07.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:10:07.308 00:10:07.308 --- 10.0.0.2 ping statistics --- 00:10:07.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.308 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:10:07.308 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:07.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:07.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:10:07.308 00:10:07.308 --- 10.0.0.1 ping statistics --- 00:10:07.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.308 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:10:07.308 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.308 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:10:07.308 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:07.308 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.308 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:07.309 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:07.309 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:07.309 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:07.309 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:07.309 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:07.309 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:07.309 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:07.309 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.309 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3497717 00:10:07.309 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3497717 00:10:07.309 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:07.309 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3497717 ']' 00:10:07.309 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.309 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.309 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:07.309 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.309 05:04:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.567 [2024-12-09 05:04:44.002795] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:10:07.567 [2024-12-09 05:04:44.002851] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.567 [2024-12-09 05:04:44.073237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:07.567 [2024-12-09 05:04:44.115057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.567 [2024-12-09 05:04:44.115099] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.567 [2024-12-09 05:04:44.115106] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.567 [2024-12-09 05:04:44.115112] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.567 [2024-12-09 05:04:44.115117] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:07.567 [2024-12-09 05:04:44.116732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.567 [2024-12-09 05:04:44.116829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.567 [2024-12-09 05:04:44.116897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.567 [2024-12-09 05:04:44.116899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.826 [2024-12-09 05:04:44.264205] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:10:07.826 [2024-12-09 05:04:44.288160] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:07.826 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:08.084 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:08.084 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:08.084 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:08.084 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.084 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:08.084 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.084 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:08.084 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.084 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:08.084 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.084 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:08.084 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.084 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:08.084 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.084 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:08.084 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:08.084 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.084 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:08.084 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.084 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:08.084 05:04:44 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:08.084 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:08.084 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:08.084 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:08.084 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:08.084 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:08.342 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:08.342 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:08.342 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:08.342 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.342 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:08.342 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.342 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:08.342 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.342 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:08.342 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.342 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:08.342 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:08.342 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:08.342 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.342 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:08.342 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:08.342 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:08.342 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.342 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:08.342 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:08.342 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:08.342 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:10:08.342 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:08.342 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:08.342 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:08.342 05:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:08.603 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:08.603 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:08.603 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:08.603 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:08.603 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:08.603 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:08.603 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:08.603 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:08.603 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:08.603 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:08.603 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:08.603 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:08.603 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:08.861 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:08.861 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:08.861 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.861 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:08.861 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.861 05:04:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:08.861 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:08.861 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:08.861 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:08.861 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.861 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:08.861 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:08.861 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.118 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:09.118 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:09.118 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:09.118 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:09.118 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:09.118 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:09.118 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:09.118 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:09.118 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:09.118 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:09.118 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:09.118 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:09.118 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:09.118 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:09.118 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:09.376 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:09.376 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:09.376 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:09.376 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:10:09.376 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:09.376 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:09.376 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:09.376 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:09.376 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.376 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:09.376 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.376 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:09.376 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.376 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:09.376 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:09.376 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.376 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:09.376 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:09.376 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:09.376 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:09.376 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:09.376 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:09.376 05:04:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:09.634 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:09.634 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:09.634 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:09.634 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:09.634 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:09.634 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:09.634 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
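For readers following the referral checks traced above, the following is a minimal sketch of the same discovery-referral flow driven by hand. It assumes a running nvmf_tgt with its RPC socket at /var/tmp/spdk.sock and a discovery listener on 10.0.0.2:8009; scripts/rpc.py stands in for the harness's rpc_cmd wrapper, and the host NQN/ID options used in the trace are omitted for brevity.

#!/usr/bin/env bash
set -euo pipefail
RPC=./scripts/rpc.py

# Add three referrals pointing at other discovery services (as in referrals.sh).
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $RPC nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done

# Target-side view: list referral addresses via RPC.
$RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

# Host-side view: the referrals appear in the discovery log page.
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

# Remove them again and confirm the referral list is empty.
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $RPC nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
done
$RPC nvmf_discovery_get_referrals | jq length
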
00:10:09.634 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:10:09.634 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:09.634 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:09.634 rmmod nvme_tcp 00:10:09.634 rmmod nvme_fabrics 00:10:09.635 rmmod nvme_keyring 00:10:09.635 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:09.635 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:09.635 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:09.635 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3497717 ']' 00:10:09.635 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3497717 00:10:09.635 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3497717 ']' 00:10:09.635 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3497717 00:10:09.635 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:09.635 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.635 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3497717 00:10:09.893 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:09.893 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:09.893 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3497717' 00:10:09.893 killing process with pid 3497717 00:10:09.893 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 3497717 00:10:09.893 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3497717 00:10:09.893 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:09.893 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:09.893 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:09.893 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:09.893 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:09.893 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:09.893 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:09.893 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:09.893 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:09.893 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.893 05:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.893 05:04:46 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:12.426 00:10:12.426 real 0m10.566s 00:10:12.426 user 0m11.897s 00:10:12.426 sys 0m5.042s 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:12.426 ************************************ 00:10:12.426 END TEST nvmf_referrals 00:10:12.426 ************************************ 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:12.426 ************************************ 00:10:12.426 START TEST nvmf_connect_disconnect 00:10:12.426 ************************************ 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:12.426 * Looking for test storage... 00:10:12.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:12.426 05:04:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:12.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.426 --rc genhtml_branch_coverage=1 00:10:12.426 --rc genhtml_function_coverage=1 00:10:12.426 --rc genhtml_legend=1 00:10:12.426 --rc geninfo_all_blocks=1 00:10:12.426 --rc geninfo_unexecuted_blocks=1 00:10:12.426 00:10:12.426 ' 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:12.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.426 --rc genhtml_branch_coverage=1 00:10:12.426 --rc genhtml_function_coverage=1 00:10:12.426 --rc genhtml_legend=1 00:10:12.426 --rc geninfo_all_blocks=1 00:10:12.426 --rc geninfo_unexecuted_blocks=1 00:10:12.426 00:10:12.426 ' 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:12.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.426 --rc genhtml_branch_coverage=1 00:10:12.426 --rc genhtml_function_coverage=1 00:10:12.426 --rc genhtml_legend=1 00:10:12.426 --rc geninfo_all_blocks=1 00:10:12.426 --rc geninfo_unexecuted_blocks=1 00:10:12.426 00:10:12.426 ' 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:12.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.426 --rc genhtml_branch_coverage=1 00:10:12.426 --rc genhtml_function_coverage=1 00:10:12.426 --rc genhtml_legend=1 00:10:12.426 --rc geninfo_all_blocks=1 00:10:12.426 --rc geninfo_unexecuted_blocks=1 00:10:12.426 00:10:12.426 ' 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.426 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.427 05:04:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:12.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:10:12.427 05:04:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:17.701 
05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:17.701 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:17.701 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:17.701 
05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:17.702 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:17.702 Found net devices under 0000:86:00.0: cvl_0_0 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:17.702 Found net devices under 0000:86:00.1: cvl_0_1 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:17.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:17.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.411 ms 00:10:17.702 00:10:17.702 --- 10.0.0.2 ping statistics --- 00:10:17.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.702 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:17.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:17.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:10:17.702 00:10:17.702 --- 10.0.0.1 ping statistics --- 00:10:17.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.702 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:10:17.702 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:17.703 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:10:17.703 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:17.703 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:17.703 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:17.703 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:17.703 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:17.703 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:17.703 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:17.962 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:17.962 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:17.962 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:17.962 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:17.962 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3501575 00:10:17.962 05:04:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3501575 00:10:17.962 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3501575 ']' 00:10:17.962 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.962 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.962 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.962 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:17.962 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.962 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:17.962 [2024-12-09 05:04:54.428122] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:10:17.962 [2024-12-09 05:04:54.428167] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.962 [2024-12-09 05:04:54.496215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:17.962 [2024-12-09 05:04:54.539732] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:17.962 [2024-12-09 05:04:54.539768] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:17.962 [2024-12-09 05:04:54.539776] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:17.962 [2024-12-09 05:04:54.539782] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:17.962 [2024-12-09 05:04:54.539787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
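The namespace plumbing performed by nvmf_tcp_init in the trace above can be summarized as the sketch below. Interface names (cvl_0_0 on the target side, cvl_0_1 on the initiator side) and addresses are taken from this run's trace; on other hardware the NIC names will differ, and sudo is added here as an assumption about privileges.

sudo ip netns add cvl_0_0_ns_spdk
sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target-side port into the namespace
sudo ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address stays in the root namespace
sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
sudo ip link set cvl_0_1 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                      # sanity check: initiator -> target
# nvmf_tgt is then launched inside the namespace, as in the trace:
#   ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
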
00:10:17.962 [2024-12-09 05:04:54.541263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.962 [2024-12-09 05:04:54.541359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.962 [2024-12-09 05:04:54.541454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:17.962 [2024-12-09 05:04:54.541456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.221 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.221 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:10:18.221 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:18.221 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:18.221 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:18.221 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.221 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:18.221 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.221 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:18.221 [2024-12-09 05:04:54.681051] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:18.221 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.221 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:18.221 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.221 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:18.221 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.221 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:18.221 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:18.221 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.221 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:18.221 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.222 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:18.222 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.222 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:18.222 05:04:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.222 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:18.222 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.222 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:18.222 [2024-12-09 05:04:54.744948] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:18.222 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.222 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:18.222 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:18.222 05:04:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:21.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.697 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:34.697 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:34.697 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:34.697 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:10:34.697 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:34.697 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:10:34.697 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:34.697 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:34.697 rmmod nvme_tcp 00:10:34.697 rmmod nvme_fabrics 00:10:34.697 rmmod nvme_keyring 00:10:34.697 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:34.697 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:10:34.697 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:10:34.697 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3501575 ']' 00:10:34.697 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3501575 00:10:34.697 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3501575 ']' 00:10:34.697 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3501575 00:10:34.697 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
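As a rough sketch of what connect_disconnect.sh set up and looped over above: the RPC arguments are copied from the trace, while the nvme connect/disconnect pair is an assumption inferred from the "disconnected 1 controller(s)" messages, since the iteration loop itself runs with xtrace off.

RPC=./scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0
$RPC bdev_malloc_create 64 512                        # 64 MiB malloc bdev, 512-byte blocks -> Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

for i in $(seq 5); do                                  # num_iterations=5 in the trace
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # prints "... disconnected 1 controller(s)"
done
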
00:10:34.697 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.697 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3501575 00:10:34.697 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:34.697 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:34.697 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3501575' 00:10:34.697 killing process with pid 3501575 00:10:34.697 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3501575 00:10:34.697 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3501575 00:10:34.697 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:34.697 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:34.697 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:34.697 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:10:34.954 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:10:34.954 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:34.954 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:10:34.954 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:34.954 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:34.954 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.954 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.954 05:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.857 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:36.857 00:10:36.857 real 0m24.771s 00:10:36.857 user 1m8.085s 00:10:36.857 sys 0m5.543s 00:10:36.857 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.857 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:36.857 ************************************ 00:10:36.857 END TEST nvmf_connect_disconnect 00:10:36.857 ************************************ 00:10:36.857 05:05:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:36.857 05:05:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:36.857 05:05:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.857 05:05:13 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:36.857 ************************************ 00:10:36.857 START TEST nvmf_multitarget 00:10:36.857 ************************************ 00:10:36.857 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:37.119 * Looking for test storage... 00:10:37.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.119 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:37.119 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:10:37.119 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:37.119 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:37.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.120 --rc genhtml_branch_coverage=1 00:10:37.120 --rc genhtml_function_coverage=1 00:10:37.120 --rc genhtml_legend=1 00:10:37.120 --rc geninfo_all_blocks=1 00:10:37.120 --rc geninfo_unexecuted_blocks=1 00:10:37.120 00:10:37.120 ' 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:37.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.120 --rc genhtml_branch_coverage=1 00:10:37.120 --rc genhtml_function_coverage=1 00:10:37.120 --rc genhtml_legend=1 00:10:37.120 --rc geninfo_all_blocks=1 00:10:37.120 --rc geninfo_unexecuted_blocks=1 00:10:37.120 00:10:37.120 ' 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:37.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.120 --rc genhtml_branch_coverage=1 00:10:37.120 --rc genhtml_function_coverage=1 00:10:37.120 --rc genhtml_legend=1 00:10:37.120 --rc geninfo_all_blocks=1 00:10:37.120 --rc geninfo_unexecuted_blocks=1 00:10:37.120 00:10:37.120 ' 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:37.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.120 --rc genhtml_branch_coverage=1 00:10:37.120 --rc genhtml_function_coverage=1 00:10:37.120 --rc genhtml_legend=1 00:10:37.120 --rc geninfo_all_blocks=1 00:10:37.120 --rc geninfo_unexecuted_blocks=1 00:10:37.120 00:10:37.120 ' 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.120 05:05:13 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.120 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.121 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.121 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.121 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:37.121 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.121 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:10:37.121 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.121 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.121 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.121 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.121 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.121 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.121 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.121 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.121 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.121 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:37.121 05:05:13 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:37.121 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:37.121 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.121 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:37.121 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:37.121 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:37.121 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.121 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.121 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.121 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:37.121 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:37.121 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:10:37.121 05:05:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
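The e810/x722/mlx arrays populated above map PCI vendor:device IDs to the bus addresses of supported NICs; the "Found 0000:86:00.x" lines that follow are this run matching two Intel E810 ports (0x8086 - 0x159b). A rough, illustrative equivalent of that scan over sysfs (not the actual nvmf/common.sh implementation):

    intel=0x8086
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor")
        device=$(cat "$pci/device")
        if [[ $vendor == "$intel" && $device == 0x159b ]]; then
            # net/ names the kernel interface bound to this PCI function
            echo "Found ${pci##*/} ($vendor - $device): $(ls "$pci/net" 2>/dev/null)"
        fi
    done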
00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:42.396 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:42.396 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.396 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:42.397 Found net devices under 0000:86:00.0: cvl_0_0 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:42.397 Found net devices under 0000:86:00.1: cvl_0_1 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:42.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:42.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:10:42.397 00:10:42.397 --- 10.0.0.2 ping statistics --- 00:10:42.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.397 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:42.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:42.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:10:42.397 00:10:42.397 --- 10.0.0.1 ping statistics --- 00:10:42.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.397 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3507864 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3507864 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3507864 ']' 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.397 05:05:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:42.397 [2024-12-09 05:05:18.849574] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
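Before the target application starts, nvmf_tcp_init (traced above) splits the two E810 ports between the root network namespace and a dedicated one, so initiator and target run separate TCP/IP stacks on the same machine. Condensed from the commands in the trace, with this run's cvl_0_0/cvl_0_1 names:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # tagged SPDK_NVMF so the teardown can strip just these rules later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                   # target address reachable from the host
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and the host reachable from the namespace

nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is the process whose DPDK/EAL startup and reactor threads are reported in the notices that follow.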
00:10:42.397 [2024-12-09 05:05:18.849621] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.397 [2024-12-09 05:05:18.920188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:42.397 [2024-12-09 05:05:18.963785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:42.397 [2024-12-09 05:05:18.963822] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:42.397 [2024-12-09 05:05:18.963829] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:42.397 [2024-12-09 05:05:18.963835] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:42.397 [2024-12-09 05:05:18.963841] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:42.397 [2024-12-09 05:05:18.965388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.397 [2024-12-09 05:05:18.965484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.397 [2024-12-09 05:05:18.965582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:42.397 [2024-12-09 05:05:18.965584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.656 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.656 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:10:42.656 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:42.656 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:42.656 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:42.656 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.656 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:42.656 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:42.656 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:42.656 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:42.656 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:42.915 "nvmf_tgt_1" 00:10:42.915 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:42.915 "nvmf_tgt_2" 00:10:42.915 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
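The multitarget body traced here and continuing below is a short create/count/delete cycle driven through multitarget_rpc.py. Stripped of the xtrace noise it amounts to roughly:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]     # only the default target at start
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]     # default plus the two new targets
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]     # back to just the default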
00:10:42.915 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:42.915 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:42.915 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:43.174 true 00:10:43.174 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:43.174 true 00:10:43.174 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:43.174 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:43.432 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:43.432 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:43.432 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:43.432 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:43.432 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:10:43.432 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:43.432 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:10:43.432 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:43.432 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:43.432 rmmod nvme_tcp 00:10:43.432 rmmod nvme_fabrics 00:10:43.432 rmmod nvme_keyring 00:10:43.432 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:43.432 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:10:43.432 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:10:43.432 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3507864 ']' 00:10:43.432 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3507864 00:10:43.432 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3507864 ']' 00:10:43.432 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3507864 00:10:43.432 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:10:43.432 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:43.432 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3507864 00:10:43.432 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:43.432 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:43.432 05:05:19 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3507864' 00:10:43.432 killing process with pid 3507864 00:10:43.432 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3507864 00:10:43.432 05:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3507864 00:10:43.691 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:43.691 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:43.691 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:43.691 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:10:43.691 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:10:43.691 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:10:43.691 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:43.691 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:43.691 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:43.691 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.691 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.691 05:05:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:46.222 00:10:46.222 real 0m8.750s 00:10:46.222 user 0m6.811s 00:10:46.222 sys 0m4.330s 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:46.222 ************************************ 00:10:46.222 END TEST nvmf_multitarget 00:10:46.222 ************************************ 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:46.222 ************************************ 00:10:46.222 START TEST nvmf_rpc 00:10:46.222 ************************************ 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:46.222 * Looking for test storage... 
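Both tests end with the same nvmftestfini teardown seen above: unload the host-side NVMe modules, kill the nvmf_tgt process, strip only the SPDK-tagged iptables rules, and tear down the namespace. As a sketch (the namespace deletion is an assumption about what _remove_spdk_ns amounts to):

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"
    while kill -0 "$nvmfpid" 2>/dev/null; do sleep 1; done    # wait for nvmf_tgt to exit
    iptables-save | grep -v SPDK_NVMF | iptables-restore      # drop only the SPDK_NVMF rules
    ip netns delete cvl_0_0_ns_spdk                           # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1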
00:10:46.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:10:46.222 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:46.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.223 --rc genhtml_branch_coverage=1 00:10:46.223 --rc genhtml_function_coverage=1 00:10:46.223 --rc genhtml_legend=1 00:10:46.223 --rc geninfo_all_blocks=1 00:10:46.223 --rc geninfo_unexecuted_blocks=1 00:10:46.223 00:10:46.223 ' 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:46.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.223 --rc genhtml_branch_coverage=1 00:10:46.223 --rc genhtml_function_coverage=1 00:10:46.223 --rc genhtml_legend=1 00:10:46.223 --rc geninfo_all_blocks=1 00:10:46.223 --rc geninfo_unexecuted_blocks=1 00:10:46.223 00:10:46.223 ' 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:46.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.223 --rc genhtml_branch_coverage=1 00:10:46.223 --rc genhtml_function_coverage=1 00:10:46.223 --rc genhtml_legend=1 00:10:46.223 --rc geninfo_all_blocks=1 00:10:46.223 --rc geninfo_unexecuted_blocks=1 00:10:46.223 00:10:46.223 ' 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:46.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.223 --rc genhtml_branch_coverage=1 00:10:46.223 --rc genhtml_function_coverage=1 00:10:46.223 --rc genhtml_legend=1 00:10:46.223 --rc geninfo_all_blocks=1 00:10:46.223 --rc geninfo_unexecuted_blocks=1 00:10:46.223 00:10:46.223 ' 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
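Sourcing nvmf/common.sh here re-runs the per-run setup traced in the lines that follow: ports 4420-4422, a host NQN generated with nvme gen-hostnqn whose UUID doubles as the host ID, and the NVME_HOST argument array. An illustrative sketch of how those values are typically consumed on the initiator side (the connect line is not from this trace):

    NVME_HOSTNQN=$(nvme gen-hostnqn)                 # e.g. nqn.2014-08.org.nvmexpress:uuid:...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}              # the UUID portion doubles as the host ID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"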
00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:46.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:46.223 05:05:22 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:10:46.223 05:05:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:51.540 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:51.540 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:51.540 Found net devices under 0000:86:00.0: cvl_0_0 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:51.540 Found net devices under 0000:86:00.1: cvl_0_1 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:10:51.540 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:51.541 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:51.541 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:51.541 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:51.541 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.541 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.541 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.541 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:51.541 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.541 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.541 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:51.541 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:51.541 05:05:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.541 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.541 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:51.541 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:51.541 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.541 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.800 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.800 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.800 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:51.800 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.800 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.800 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.800 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:51.800 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:51.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:51.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:10:51.800 00:10:51.800 --- 10.0.0.2 ping statistics --- 00:10:51.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.800 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:10:51.800 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:51.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:51.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:10:51.800 00:10:51.800 --- 10.0.0.1 ping statistics --- 00:10:51.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.800 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:10:51.800 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.800 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:10:51.800 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:51.800 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.800 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:51.800 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:51.800 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.800 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:51.800 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:51.800 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:51.801 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:51.801 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:51.801 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.059 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3511554 00:10:52.059 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:52.059 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3511554 00:10:52.059 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3511554 ']' 00:10:52.059 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.059 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.059 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.059 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.059 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.059 [2024-12-09 05:05:28.495621] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
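Stripped of the xtrace prefixes, the nvmf_tcp_init sequence traced above amounts to the following bring-up. This is a sketch reconstructed from this log (cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses and port 4420 are the values used in this run; the iptables comment bookkeeping is omitted):

    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # host -> target reachability check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> host reachability check

The nvmf_tgt application is then launched inside that namespace (the ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF line traced just above), so the target listens on 10.0.0.2 while the initiator-side nvme-cli commands run in the root namespace.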
00:10:52.059 [2024-12-09 05:05:28.495673] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.059 [2024-12-09 05:05:28.566242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:52.059 [2024-12-09 05:05:28.612142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:52.059 [2024-12-09 05:05:28.612173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:52.059 [2024-12-09 05:05:28.612181] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:52.059 [2024-12-09 05:05:28.612187] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:52.059 [2024-12-09 05:05:28.612192] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:52.059 [2024-12-09 05:05:28.613784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.059 [2024-12-09 05:05:28.613883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.059 [2024-12-09 05:05:28.613977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:52.059 [2024-12-09 05:05:28.613978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.318 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:52.318 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:52.318 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:52.318 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:52.318 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.318 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.318 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:52.318 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.318 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.318 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.318 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:52.318 "tick_rate": 2300000000, 00:10:52.318 "poll_groups": [ 00:10:52.318 { 00:10:52.318 "name": "nvmf_tgt_poll_group_000", 00:10:52.318 "admin_qpairs": 0, 00:10:52.318 "io_qpairs": 0, 00:10:52.318 "current_admin_qpairs": 0, 00:10:52.318 "current_io_qpairs": 0, 00:10:52.318 "pending_bdev_io": 0, 00:10:52.318 "completed_nvme_io": 0, 00:10:52.318 "transports": [] 00:10:52.318 }, 00:10:52.318 { 00:10:52.318 "name": "nvmf_tgt_poll_group_001", 00:10:52.318 "admin_qpairs": 0, 00:10:52.318 "io_qpairs": 0, 00:10:52.318 "current_admin_qpairs": 0, 00:10:52.318 "current_io_qpairs": 0, 00:10:52.318 "pending_bdev_io": 0, 00:10:52.318 "completed_nvme_io": 0, 00:10:52.318 "transports": [] 00:10:52.318 }, 00:10:52.318 { 00:10:52.318 "name": "nvmf_tgt_poll_group_002", 00:10:52.318 "admin_qpairs": 0, 00:10:52.318 "io_qpairs": 0, 00:10:52.318 
"current_admin_qpairs": 0, 00:10:52.318 "current_io_qpairs": 0, 00:10:52.318 "pending_bdev_io": 0, 00:10:52.318 "completed_nvme_io": 0, 00:10:52.318 "transports": [] 00:10:52.318 }, 00:10:52.318 { 00:10:52.318 "name": "nvmf_tgt_poll_group_003", 00:10:52.318 "admin_qpairs": 0, 00:10:52.318 "io_qpairs": 0, 00:10:52.318 "current_admin_qpairs": 0, 00:10:52.318 "current_io_qpairs": 0, 00:10:52.318 "pending_bdev_io": 0, 00:10:52.318 "completed_nvme_io": 0, 00:10:52.318 "transports": [] 00:10:52.318 } 00:10:52.318 ] 00:10:52.318 }' 00:10:52.318 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:52.318 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:52.318 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:52.318 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:52.318 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:52.318 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:52.318 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:52.318 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:52.318 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.318 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.318 [2024-12-09 05:05:28.865635] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:52.318 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.318 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:52.318 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.318 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.318 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.318 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:52.318 "tick_rate": 2300000000, 00:10:52.318 "poll_groups": [ 00:10:52.318 { 00:10:52.318 "name": "nvmf_tgt_poll_group_000", 00:10:52.318 "admin_qpairs": 0, 00:10:52.318 "io_qpairs": 0, 00:10:52.318 "current_admin_qpairs": 0, 00:10:52.318 "current_io_qpairs": 0, 00:10:52.318 "pending_bdev_io": 0, 00:10:52.318 "completed_nvme_io": 0, 00:10:52.318 "transports": [ 00:10:52.319 { 00:10:52.319 "trtype": "TCP" 00:10:52.319 } 00:10:52.319 ] 00:10:52.319 }, 00:10:52.319 { 00:10:52.319 "name": "nvmf_tgt_poll_group_001", 00:10:52.319 "admin_qpairs": 0, 00:10:52.319 "io_qpairs": 0, 00:10:52.319 "current_admin_qpairs": 0, 00:10:52.319 "current_io_qpairs": 0, 00:10:52.319 "pending_bdev_io": 0, 00:10:52.319 "completed_nvme_io": 0, 00:10:52.319 "transports": [ 00:10:52.319 { 00:10:52.319 "trtype": "TCP" 00:10:52.319 } 00:10:52.319 ] 00:10:52.319 }, 00:10:52.319 { 00:10:52.319 "name": "nvmf_tgt_poll_group_002", 00:10:52.319 "admin_qpairs": 0, 00:10:52.319 "io_qpairs": 0, 00:10:52.319 "current_admin_qpairs": 0, 00:10:52.319 "current_io_qpairs": 0, 00:10:52.319 "pending_bdev_io": 0, 00:10:52.319 "completed_nvme_io": 0, 00:10:52.319 "transports": [ 00:10:52.319 { 00:10:52.319 "trtype": "TCP" 
00:10:52.319 } 00:10:52.319 ] 00:10:52.319 }, 00:10:52.319 { 00:10:52.319 "name": "nvmf_tgt_poll_group_003", 00:10:52.319 "admin_qpairs": 0, 00:10:52.319 "io_qpairs": 0, 00:10:52.319 "current_admin_qpairs": 0, 00:10:52.319 "current_io_qpairs": 0, 00:10:52.319 "pending_bdev_io": 0, 00:10:52.319 "completed_nvme_io": 0, 00:10:52.319 "transports": [ 00:10:52.319 { 00:10:52.319 "trtype": "TCP" 00:10:52.319 } 00:10:52.319 ] 00:10:52.319 } 00:10:52.319 ] 00:10:52.319 }' 00:10:52.319 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:52.319 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:52.319 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:52.319 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:52.319 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:52.319 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:52.319 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:52.319 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:52.319 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:52.578 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:52.578 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:52.578 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:52.578 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:52.578 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:52.578 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.578 05:05:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.578 Malloc1 00:10:52.578 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.578 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:52.578 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.578 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.578 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.578 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:52.578 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.578 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.578 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.578 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:52.578 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.578 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.578 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.578 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:52.578 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.578 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.578 [2024-12-09 05:05:29.041843] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.578 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.579 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:10:52.579 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:52.579 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:10:52.579 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:52.579 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:52.579 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:52.579 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:52.579 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:52.579 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:52.579 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:52.579 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:52.579 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:10:52.579 [2024-12-09 05:05:29.070455] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:10:52.579 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:52.579 could not add new controller: failed to write to nvme-fabrics device 00:10:52.579 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:52.579 05:05:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:52.579 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:52.579 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:52.579 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:52.579 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.579 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.579 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.579 05:05:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:53.957 05:05:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:53.957 05:05:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:53.957 05:05:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:53.957 05:05:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:53.957 05:05:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:55.864 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:55.864 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:55.864 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:55.864 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:55.864 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:55.864 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:55.864 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:55.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.864 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:55.864 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:55.864 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:55.864 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.864 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:55.864 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.864 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:55.864 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:55.864 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.864 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.864 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.865 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:55.865 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:55.865 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:55.865 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:55.865 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:55.865 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:55.865 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:55.865 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:55.865 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:55.865 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:55.865 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:55.865 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:55.865 [2024-12-09 05:05:32.378876] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:10:55.865 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:55.865 could not add new controller: failed to write to nvme-fabrics device 00:10:55.865 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:55.865 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:55.865 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:55.865 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:55.865 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:55.865 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.865 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.865 
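The host access-control exercise traced above and continuing just below (the target/rpc.sh@52-76 markers) boils down to the RPC pattern below. The sketch keeps the rpc_cmd wrapper from the trace; connect_cnode1 and HOSTNQN are placeholders introduced here for brevity (the run uses the long uuid host NQN directly), not names from the script:

    connect_cnode1() {   # placeholder helper, not from the script
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN"
    }
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # deny unknown hosts
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    connect_cnode1                                                        # rejected: host not allowed
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
    connect_cnode1 && nvme disconnect -n nqn.2016-06.io.spdk:cnode1       # accepted, then detached
    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
    connect_cnode1                                                        # rejected again
    rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1   # re-open to any host
    connect_cnode1                                                        # accepted for any host

The two ERROR lines above ("Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host ...") are the expected rejections: the connects are wrapped in the NOT helper, so the test passes when the connect fails.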
05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.865 05:05:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:57.244 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:57.244 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:57.244 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:57.244 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:57.244 05:05:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:59.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:59.154 
05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.154 [2024-12-09 05:05:35.743399] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.154 05:05:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:00.530 05:05:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:00.530 05:05:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:00.530 05:05:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:00.530 05:05:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:00.530 05:05:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:02.430 05:05:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:02.430 05:05:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:02.430 05:05:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:02.430 05:05:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:02.430 05:05:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:02.430 05:05:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:02.430 05:05:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:02.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.430 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:02.430 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:02.430 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:02.430 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:02.430 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:02.430 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:02.430 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:02.430 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:02.430 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.430 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.689 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.689 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:02.689 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.689 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.689 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.689 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:02.689 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:02.689 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.689 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.689 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.689 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:02.689 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.689 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.689 [2024-12-09 05:05:39.104831] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:02.689 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.689 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:02.689 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.689 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.689 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.689 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:02.689 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.689 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.689 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.689 05:05:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:03.624 05:05:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:03.624 05:05:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:03.624 05:05:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:03.624 05:05:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:03.624 05:05:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:06.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.162 [2024-12-09 05:05:42.438596] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.162 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:07.103 05:05:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:07.103 05:05:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:07.103 05:05:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:07.103 05:05:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:07.103 05:05:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:09.083 
05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:09.083 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:09.083 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:09.083 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:09.083 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:09.083 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:09.083 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:09.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.083 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:09.083 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:09.083 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:09.083 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
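Each pass of the seq 1 5 loop above (target/rpc.sh@81-94) repeats the same provision/connect/teardown cycle. Reduced to the underlying RPCs and nvme-cli calls it looks roughly like the following sketch; the NQN, serial, listener address and namespace ID 5 are the values from this run, and HOSTNQN again stands in for the uuid host NQN:

    for i in $(seq 1 5); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # attach Malloc1 as nsid 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN"
        # waitforserial: poll until a block device with serial SPDKISFASTANDAWESOME shows up
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

A second five-iteration loop (target/rpc.sh@99 onward, traced further below) repeats the subsystem setup without the connect step in the portion shown here, adding the namespace with the default ID and then removing nsid 1 rather than nsid 5.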
00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.366 [2024-12-09 05:05:45.747348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.366 05:05:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:10.407 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:10.407 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:10.407 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:10.407 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:10.407 05:05:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:12.310 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:12.310 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:12.310 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:12.568 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:12.568 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:12.568 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:12.568 05:05:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:12.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
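The waitforserial / waitforserial_disconnect helpers that bracket every connect and disconnect above simply poll lsblk for the subsystem serial number. A condensed sketch of that polling logic, simplified from the traced commands (the retry bound follows the (( i++ <= 15 )) guard and the 2-second sleep visible in the trace):

    waitforserial() {              # wait until a namespace with this serial appears
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            sleep 2
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
        done
        return 1
    }

    waitforserial_disconnect() {   # wait until no device with this serial remains
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
            sleep 2
        done
        return 1
    }

In this test the calls are waitforserial SPDKISFASTANDAWESOME after each nvme connect and waitforserial_disconnect SPDKISFASTANDAWESOME after each nvme disconnect.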
00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.569 [2024-12-09 05:05:49.099202] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.569 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:13.946 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:13.946 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:13.946 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:13.946 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:13.946 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:15.848 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:15.848 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:15.848 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:15.849 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:15.849 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:15.849 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:15.849 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:15.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.849 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:15.849 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:15.849 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:15.849 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:16.108 
05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.108 [2024-12-09 05:05:52.555266] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.108 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.109 [2024-12-09 05:05:52.603337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.109 
05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.109 [2024-12-09 05:05:52.651472] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.109 [2024-12-09 05:05:52.699654] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.109 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.109 [2024-12-09 05:05:52.747830] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:16.369 "tick_rate": 2300000000, 00:11:16.369 "poll_groups": [ 00:11:16.369 { 00:11:16.369 "name": "nvmf_tgt_poll_group_000", 00:11:16.369 "admin_qpairs": 2, 00:11:16.369 "io_qpairs": 168, 00:11:16.369 "current_admin_qpairs": 0, 00:11:16.369 "current_io_qpairs": 0, 00:11:16.369 "pending_bdev_io": 0, 00:11:16.369 "completed_nvme_io": 219, 00:11:16.369 "transports": [ 00:11:16.369 { 00:11:16.369 "trtype": "TCP" 00:11:16.369 } 00:11:16.369 ] 00:11:16.369 }, 00:11:16.369 { 00:11:16.369 "name": "nvmf_tgt_poll_group_001", 00:11:16.369 "admin_qpairs": 2, 00:11:16.369 "io_qpairs": 168, 00:11:16.369 "current_admin_qpairs": 0, 00:11:16.369 "current_io_qpairs": 0, 00:11:16.369 "pending_bdev_io": 0, 00:11:16.369 "completed_nvme_io": 218, 00:11:16.369 "transports": [ 00:11:16.369 { 00:11:16.369 "trtype": "TCP" 00:11:16.369 } 00:11:16.369 ] 00:11:16.369 }, 00:11:16.369 { 00:11:16.369 "name": "nvmf_tgt_poll_group_002", 00:11:16.369 "admin_qpairs": 1, 00:11:16.369 "io_qpairs": 168, 00:11:16.369 "current_admin_qpairs": 0, 00:11:16.369 "current_io_qpairs": 0, 00:11:16.369 "pending_bdev_io": 0, 00:11:16.369 "completed_nvme_io": 315, 00:11:16.369 "transports": [ 00:11:16.369 { 00:11:16.369 "trtype": "TCP" 00:11:16.369 } 00:11:16.369 ] 00:11:16.369 }, 00:11:16.369 { 00:11:16.369 "name": "nvmf_tgt_poll_group_003", 00:11:16.369 "admin_qpairs": 2, 00:11:16.369 "io_qpairs": 168, 00:11:16.369 "current_admin_qpairs": 0, 00:11:16.369 "current_io_qpairs": 0, 00:11:16.369 "pending_bdev_io": 0, 00:11:16.369 "completed_nvme_io": 270, 00:11:16.369 "transports": [ 00:11:16.369 { 00:11:16.369 "trtype": "TCP" 00:11:16.369 } 00:11:16.369 ] 00:11:16.369 } 00:11:16.369 ] 00:11:16.369 }' 00:11:16.369 05:05:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:16.369 rmmod nvme_tcp 00:11:16.369 rmmod nvme_fabrics 00:11:16.369 rmmod nvme_keyring 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3511554 ']' 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3511554 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3511554 ']' 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3511554 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.369 05:05:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3511554 00:11:16.369 05:05:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.369 05:05:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:16.370 05:05:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3511554' 00:11:16.370 killing process with pid 3511554 00:11:16.370 05:05:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3511554 00:11:16.370 05:05:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3511554 00:11:16.629 05:05:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:16.629 05:05:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:16.629 05:05:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:16.629 05:05:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:11:16.629 05:05:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:11:16.629 05:05:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:11:16.629 05:05:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:16.629 05:05:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:16.629 05:05:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:16.629 05:05:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.629 05:05:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.629 05:05:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:19.166 00:11:19.166 real 0m32.985s 00:11:19.166 user 1m39.609s 00:11:19.166 sys 0m6.507s 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.166 ************************************ 00:11:19.166 END TEST nvmf_rpc 00:11:19.166 ************************************ 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:19.166 ************************************ 00:11:19.166 START TEST nvmf_invalid 00:11:19.166 ************************************ 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:19.166 * Looking for test storage... 
00:11:19.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:19.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.166 --rc genhtml_branch_coverage=1 00:11:19.166 --rc genhtml_function_coverage=1 00:11:19.166 --rc genhtml_legend=1 00:11:19.166 --rc geninfo_all_blocks=1 00:11:19.166 --rc geninfo_unexecuted_blocks=1 00:11:19.166 00:11:19.166 ' 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:19.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.166 --rc genhtml_branch_coverage=1 00:11:19.166 --rc genhtml_function_coverage=1 00:11:19.166 --rc genhtml_legend=1 00:11:19.166 --rc geninfo_all_blocks=1 00:11:19.166 --rc geninfo_unexecuted_blocks=1 00:11:19.166 00:11:19.166 ' 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:19.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.166 --rc genhtml_branch_coverage=1 00:11:19.166 --rc genhtml_function_coverage=1 00:11:19.166 --rc genhtml_legend=1 00:11:19.166 --rc geninfo_all_blocks=1 00:11:19.166 --rc geninfo_unexecuted_blocks=1 00:11:19.166 00:11:19.166 ' 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:19.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.166 --rc genhtml_branch_coverage=1 00:11:19.166 --rc genhtml_function_coverage=1 00:11:19.166 --rc genhtml_legend=1 00:11:19.166 --rc geninfo_all_blocks=1 00:11:19.166 --rc geninfo_unexecuted_blocks=1 00:11:19.166 00:11:19.166 ' 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:19.166 05:05:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.166 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:19.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:11:19.167 05:05:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:24.439 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:24.439 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:24.439 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:24.440 Found net devices under 0000:86:00.0: cvl_0_0 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:24.440 Found net devices under 0000:86:00.1: cvl_0_1 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:24.440 05:06:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:24.440 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:24.440 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:24.440 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:24.440 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:24.699 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:24.699 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:24.699 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:24.699 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:24.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:24.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:11:24.699 00:11:24.699 --- 10.0.0.2 ping statistics --- 00:11:24.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.699 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:11:24.699 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:24.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:24.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:11:24.699 00:11:24.699 --- 10.0.0.1 ping statistics --- 00:11:24.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.699 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:11:24.699 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:24.699 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:11:24.699 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:24.699 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:24.699 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:24.699 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:24.699 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:24.699 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:24.699 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:24.699 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:24.699 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:24.699 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:24.699 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:24.699 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3519418 00:11:24.699 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:24.699 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3519418 00:11:24.699 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3519418 ']' 00:11:24.699 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.699 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.699 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.699 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.699 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:24.699 [2024-12-09 05:06:01.246189] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:11:24.699 [2024-12-09 05:06:01.246236] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.699 [2024-12-09 05:06:01.315871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:24.958 [2024-12-09 05:06:01.359700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:24.958 [2024-12-09 05:06:01.359736] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:24.958 [2024-12-09 05:06:01.359744] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:24.958 [2024-12-09 05:06:01.359750] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:24.958 [2024-12-09 05:06:01.359755] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:24.958 [2024-12-09 05:06:01.361219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.958 [2024-12-09 05:06:01.361308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:24.958 [2024-12-09 05:06:01.361396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:24.958 [2024-12-09 05:06:01.361397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.958 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.958 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:11:24.958 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:24.958 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:24.958 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:24.958 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.958 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:24.958 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode11997 00:11:25.217 [2024-12-09 05:06:01.677034] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:25.217 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:25.217 { 00:11:25.217 "nqn": "nqn.2016-06.io.spdk:cnode11997", 00:11:25.217 "tgt_name": "foobar", 00:11:25.217 "method": "nvmf_create_subsystem", 00:11:25.217 "req_id": 1 00:11:25.217 } 00:11:25.217 Got JSON-RPC error response 00:11:25.217 response: 00:11:25.217 { 00:11:25.217 "code": -32603, 00:11:25.217 "message": "Unable to find target foobar" 00:11:25.217 }' 00:11:25.217 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:25.217 { 00:11:25.217 "nqn": "nqn.2016-06.io.spdk:cnode11997", 00:11:25.217 "tgt_name": "foobar", 00:11:25.217 "method": "nvmf_create_subsystem", 00:11:25.217 "req_id": 1 00:11:25.217 } 00:11:25.217 Got JSON-RPC error response 00:11:25.217 
response: 00:11:25.217 { 00:11:25.217 "code": -32603, 00:11:25.217 "message": "Unable to find target foobar" 00:11:25.217 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:25.217 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:25.217 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode11133 00:11:25.476 [2024-12-09 05:06:01.889764] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11133: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:25.476 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:25.476 { 00:11:25.476 "nqn": "nqn.2016-06.io.spdk:cnode11133", 00:11:25.476 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:25.476 "method": "nvmf_create_subsystem", 00:11:25.476 "req_id": 1 00:11:25.476 } 00:11:25.476 Got JSON-RPC error response 00:11:25.476 response: 00:11:25.476 { 00:11:25.476 "code": -32602, 00:11:25.476 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:25.476 }' 00:11:25.476 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:25.476 { 00:11:25.476 "nqn": "nqn.2016-06.io.spdk:cnode11133", 00:11:25.476 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:25.476 "method": "nvmf_create_subsystem", 00:11:25.476 "req_id": 1 00:11:25.476 } 00:11:25.476 Got JSON-RPC error response 00:11:25.476 response: 00:11:25.476 { 00:11:25.476 "code": -32602, 00:11:25.477 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:25.477 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:25.477 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:25.477 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode29694 00:11:25.477 [2024-12-09 05:06:02.094395] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29694: invalid model number 'SPDK_Controller' 00:11:25.736 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:25.737 { 00:11:25.737 "nqn": "nqn.2016-06.io.spdk:cnode29694", 00:11:25.737 "model_number": "SPDK_Controller\u001f", 00:11:25.737 "method": "nvmf_create_subsystem", 00:11:25.737 "req_id": 1 00:11:25.737 } 00:11:25.737 Got JSON-RPC error response 00:11:25.737 response: 00:11:25.737 { 00:11:25.737 "code": -32602, 00:11:25.737 "message": "Invalid MN SPDK_Controller\u001f" 00:11:25.737 }' 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:25.737 { 00:11:25.737 "nqn": "nqn.2016-06.io.spdk:cnode29694", 00:11:25.737 "model_number": "SPDK_Controller\u001f", 00:11:25.737 "method": "nvmf_create_subsystem", 00:11:25.737 "req_id": 1 00:11:25.737 } 00:11:25.737 Got JSON-RPC error response 00:11:25.737 response: 00:11:25.737 { 00:11:25.737 "code": -32602, 00:11:25.737 "message": "Invalid MN SPDK_Controller\u001f" 00:11:25.737 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:11:25.737 05:06:02 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.737 05:06:02 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:11:25.737 
05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.737 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:11:25.738 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 
00:11:25.738 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:11:25.738 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.738 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.738 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:11:25.738 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:11:25.738 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:11:25.738 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.738 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.738 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:11:25.738 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:25.738 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:11:25.738 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.738 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.738 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ : == \- ]] 00:11:25.738 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ':8[MWL|E5Y&m;)Tgu,.D`' 00:11:25.738 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ':8[MWL|E5Y&m;)Tgu,.D`' nqn.2016-06.io.spdk:cnode23205 00:11:25.998 [2024-12-09 05:06:02.447607] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23205: invalid serial number ':8[MWL|E5Y&m;)Tgu,.D`' 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:11:25.998 { 00:11:25.998 "nqn": "nqn.2016-06.io.spdk:cnode23205", 00:11:25.998 "serial_number": ":8[MWL|E5Y&m;)Tgu,.D`", 00:11:25.998 "method": "nvmf_create_subsystem", 00:11:25.998 "req_id": 1 00:11:25.998 } 00:11:25.998 Got JSON-RPC error response 00:11:25.998 response: 00:11:25.998 { 00:11:25.998 "code": -32602, 00:11:25.998 "message": "Invalid SN :8[MWL|E5Y&m;)Tgu,.D`" 00:11:25.998 }' 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:11:25.998 { 00:11:25.998 "nqn": "nqn.2016-06.io.spdk:cnode23205", 00:11:25.998 "serial_number": ":8[MWL|E5Y&m;)Tgu,.D`", 00:11:25.998 "method": "nvmf_create_subsystem", 00:11:25.998 "req_id": 1 00:11:25.998 } 00:11:25.998 Got JSON-RPC error response 00:11:25.998 response: 00:11:25.998 { 00:11:25.998 "code": -32602, 00:11:25.998 "message": "Invalid SN :8[MWL|E5Y&m;)Tgu,.D`" 00:11:25.998 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' 
'74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:25.998 
05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:11:25.998 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:11:25.999 
05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.999 
05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:11:25.999 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.258 
05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 
00:11:26.258 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x22' 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ > == \- ]] 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '>QLi78Kj/"af%_s9O,t6a`Q>PB ly9l\$8,Jp;Z"_' 00:11:26.259 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '>QLi78Kj/"af%_s9O,t6a`Q>PB ly9l\$8,Jp;Z"_' nqn.2016-06.io.spdk:cnode19168 00:11:26.518 [2024-12-09 05:06:02.921167] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19168: invalid model number '>QLi78Kj/"af%_s9O,t6a`Q>PB ly9l\$8,Jp;Z"_' 00:11:26.518 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:11:26.518 { 00:11:26.518 "nqn": "nqn.2016-06.io.spdk:cnode19168", 00:11:26.518 "model_number": ">QLi78Kj/\"af%_s9O,t6a`Q>PB ly9l\\$8,Jp;Z\"_", 00:11:26.518 "method": "nvmf_create_subsystem", 00:11:26.518 "req_id": 1 00:11:26.518 } 00:11:26.518 Got JSON-RPC error response 00:11:26.518 response: 00:11:26.518 { 00:11:26.518 "code": -32602, 00:11:26.518 "message": "Invalid MN >QLi78Kj/\"af%_s9O,t6a`Q>PB ly9l\\$8,Jp;Z\"_" 00:11:26.518 }' 00:11:26.518 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:11:26.518 { 00:11:26.518 "nqn": "nqn.2016-06.io.spdk:cnode19168", 00:11:26.518 "model_number": ">QLi78Kj/\"af%_s9O,t6a`Q>PB ly9l\\$8,Jp;Z\"_", 00:11:26.518 "method": "nvmf_create_subsystem", 00:11:26.518 "req_id": 1 00:11:26.518 } 00:11:26.518 Got JSON-RPC error response 00:11:26.518 response: 00:11:26.518 { 00:11:26.518 "code": -32602, 00:11:26.518 "message": "Invalid MN >QLi78Kj/\"af%_s9O,t6a`Q>PB ly9l\\$8,Jp;Z\"_" 00:11:26.518 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:26.518 05:06:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:26.518 [2024-12-09 05:06:03.129927] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.777 05:06:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:26.777 05:06:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:26.777 05:06:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:11:26.777 05:06:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:11:26.777 05:06:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@67 -- # IP= 00:11:26.777 05:06:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:27.036 [2024-12-09 05:06:03.563363] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:27.036 05:06:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:11:27.036 { 00:11:27.036 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:27.036 "listen_address": { 00:11:27.036 "trtype": "tcp", 00:11:27.036 "traddr": "", 00:11:27.036 "trsvcid": "4421" 00:11:27.036 }, 00:11:27.036 "method": "nvmf_subsystem_remove_listener", 00:11:27.036 "req_id": 1 00:11:27.036 } 00:11:27.036 Got JSON-RPC error response 00:11:27.036 response: 00:11:27.036 { 00:11:27.036 "code": -32602, 00:11:27.036 "message": "Invalid parameters" 00:11:27.036 }' 00:11:27.036 05:06:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:11:27.036 { 00:11:27.036 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:27.036 "listen_address": { 00:11:27.036 "trtype": "tcp", 00:11:27.036 "traddr": "", 00:11:27.036 "trsvcid": "4421" 00:11:27.036 }, 00:11:27.036 "method": "nvmf_subsystem_remove_listener", 00:11:27.036 "req_id": 1 00:11:27.036 } 00:11:27.036 Got JSON-RPC error response 00:11:27.036 response: 00:11:27.036 { 00:11:27.036 "code": -32602, 00:11:27.036 "message": "Invalid parameters" 00:11:27.036 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:27.036 05:06:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13348 -i 0 00:11:27.295 [2024-12-09 05:06:03.780040] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13348: invalid cntlid range [0-65519] 00:11:27.295 05:06:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:11:27.295 { 00:11:27.295 "nqn": "nqn.2016-06.io.spdk:cnode13348", 00:11:27.295 "min_cntlid": 0, 00:11:27.295 "method": "nvmf_create_subsystem", 00:11:27.295 "req_id": 1 00:11:27.295 } 00:11:27.295 Got JSON-RPC error response 00:11:27.295 response: 00:11:27.295 { 00:11:27.295 "code": -32602, 00:11:27.295 "message": "Invalid cntlid range [0-65519]" 00:11:27.295 }' 00:11:27.295 05:06:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:11:27.295 { 00:11:27.295 "nqn": "nqn.2016-06.io.spdk:cnode13348", 00:11:27.295 "min_cntlid": 0, 00:11:27.295 "method": "nvmf_create_subsystem", 00:11:27.295 "req_id": 1 00:11:27.295 } 00:11:27.295 Got JSON-RPC error response 00:11:27.295 response: 00:11:27.295 { 00:11:27.295 "code": -32602, 00:11:27.295 "message": "Invalid cntlid range [0-65519]" 00:11:27.295 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:27.295 05:06:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10725 -i 65520 00:11:27.555 [2024-12-09 05:06:03.980685] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10725: invalid cntlid range [65520-65519] 00:11:27.555 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:11:27.555 { 00:11:27.555 "nqn": "nqn.2016-06.io.spdk:cnode10725", 00:11:27.555 "min_cntlid": 
65520, 00:11:27.555 "method": "nvmf_create_subsystem", 00:11:27.555 "req_id": 1 00:11:27.555 } 00:11:27.555 Got JSON-RPC error response 00:11:27.555 response: 00:11:27.555 { 00:11:27.555 "code": -32602, 00:11:27.555 "message": "Invalid cntlid range [65520-65519]" 00:11:27.555 }' 00:11:27.555 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:11:27.555 { 00:11:27.555 "nqn": "nqn.2016-06.io.spdk:cnode10725", 00:11:27.555 "min_cntlid": 65520, 00:11:27.555 "method": "nvmf_create_subsystem", 00:11:27.555 "req_id": 1 00:11:27.555 } 00:11:27.555 Got JSON-RPC error response 00:11:27.555 response: 00:11:27.555 { 00:11:27.555 "code": -32602, 00:11:27.555 "message": "Invalid cntlid range [65520-65519]" 00:11:27.555 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:27.555 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4524 -I 0 00:11:27.555 [2024-12-09 05:06:04.189385] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4524: invalid cntlid range [1-0] 00:11:27.813 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:11:27.813 { 00:11:27.813 "nqn": "nqn.2016-06.io.spdk:cnode4524", 00:11:27.813 "max_cntlid": 0, 00:11:27.813 "method": "nvmf_create_subsystem", 00:11:27.813 "req_id": 1 00:11:27.813 } 00:11:27.813 Got JSON-RPC error response 00:11:27.813 response: 00:11:27.813 { 00:11:27.813 "code": -32602, 00:11:27.813 "message": "Invalid cntlid range [1-0]" 00:11:27.813 }' 00:11:27.813 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:11:27.813 { 00:11:27.813 "nqn": "nqn.2016-06.io.spdk:cnode4524", 00:11:27.814 "max_cntlid": 0, 00:11:27.814 "method": "nvmf_create_subsystem", 00:11:27.814 "req_id": 1 00:11:27.814 } 00:11:27.814 Got JSON-RPC error response 00:11:27.814 response: 00:11:27.814 { 00:11:27.814 "code": -32602, 00:11:27.814 "message": "Invalid cntlid range [1-0]" 00:11:27.814 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:27.814 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8035 -I 65520 00:11:27.814 [2024-12-09 05:06:04.398094] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8035: invalid cntlid range [1-65520] 00:11:27.814 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:11:27.814 { 00:11:27.814 "nqn": "nqn.2016-06.io.spdk:cnode8035", 00:11:27.814 "max_cntlid": 65520, 00:11:27.814 "method": "nvmf_create_subsystem", 00:11:27.814 "req_id": 1 00:11:27.814 } 00:11:27.814 Got JSON-RPC error response 00:11:27.814 response: 00:11:27.814 { 00:11:27.814 "code": -32602, 00:11:27.814 "message": "Invalid cntlid range [1-65520]" 00:11:27.814 }' 00:11:27.814 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:11:27.814 { 00:11:27.814 "nqn": "nqn.2016-06.io.spdk:cnode8035", 00:11:27.814 "max_cntlid": 65520, 00:11:27.814 "method": "nvmf_create_subsystem", 00:11:27.814 "req_id": 1 00:11:27.814 } 00:11:27.814 Got JSON-RPC error response 00:11:27.814 response: 00:11:27.814 { 00:11:27.814 "code": -32602, 00:11:27.814 "message": "Invalid cntlid range [1-65520]" 00:11:27.814 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 
00:11:27.814 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28259 -i 6 -I 5 00:11:28.073 [2024-12-09 05:06:04.602762] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28259: invalid cntlid range [6-5] 00:11:28.073 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:11:28.073 { 00:11:28.073 "nqn": "nqn.2016-06.io.spdk:cnode28259", 00:11:28.073 "min_cntlid": 6, 00:11:28.073 "max_cntlid": 5, 00:11:28.073 "method": "nvmf_create_subsystem", 00:11:28.073 "req_id": 1 00:11:28.073 } 00:11:28.073 Got JSON-RPC error response 00:11:28.073 response: 00:11:28.073 { 00:11:28.073 "code": -32602, 00:11:28.073 "message": "Invalid cntlid range [6-5]" 00:11:28.073 }' 00:11:28.073 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:11:28.073 { 00:11:28.073 "nqn": "nqn.2016-06.io.spdk:cnode28259", 00:11:28.073 "min_cntlid": 6, 00:11:28.073 "max_cntlid": 5, 00:11:28.073 "method": "nvmf_create_subsystem", 00:11:28.073 "req_id": 1 00:11:28.073 } 00:11:28.073 Got JSON-RPC error response 00:11:28.073 response: 00:11:28.073 { 00:11:28.073 "code": -32602, 00:11:28.073 "message": "Invalid cntlid range [6-5]" 00:11:28.073 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:28.073 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:28.331 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:11:28.332 { 00:11:28.332 "name": "foobar", 00:11:28.332 "method": "nvmf_delete_target", 00:11:28.332 "req_id": 1 00:11:28.332 } 00:11:28.332 Got JSON-RPC error response 00:11:28.332 response: 00:11:28.332 { 00:11:28.332 "code": -32602, 00:11:28.332 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:28.332 }' 00:11:28.332 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:11:28.332 { 00:11:28.332 "name": "foobar", 00:11:28.332 "method": "nvmf_delete_target", 00:11:28.332 "req_id": 1 00:11:28.332 } 00:11:28.332 Got JSON-RPC error response 00:11:28.332 response: 00:11:28.332 { 00:11:28.332 "code": -32602, 00:11:28.332 "message": "The specified target doesn't exist, cannot delete it." 
00:11:28.332 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:28.332 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:28.332 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:11:28.332 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:28.332 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:11:28.332 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:28.332 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:11:28.332 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:28.332 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:28.332 rmmod nvme_tcp 00:11:28.332 rmmod nvme_fabrics 00:11:28.332 rmmod nvme_keyring 00:11:28.332 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:28.332 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:11:28.332 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:11:28.332 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3519418 ']' 00:11:28.332 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3519418 00:11:28.332 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3519418 ']' 00:11:28.332 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3519418 00:11:28.332 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:11:28.332 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:28.332 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3519418 00:11:28.332 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:28.332 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:28.332 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3519418' 00:11:28.332 killing process with pid 3519418 00:11:28.332 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3519418 00:11:28.332 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3519418 00:11:28.591 05:06:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:28.591 05:06:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:28.591 05:06:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:28.591 05:06:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:11:28.591 05:06:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:28.591 05:06:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:11:28.591 05:06:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 
-- # iptables-restore 00:11:28.591 05:06:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:28.591 05:06:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:28.591 05:06:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.591 05:06:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.591 05:06:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.494 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:30.494 00:11:30.494 real 0m11.765s 00:11:30.494 user 0m18.814s 00:11:30.494 sys 0m5.188s 00:11:30.494 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.494 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:30.494 ************************************ 00:11:30.494 END TEST nvmf_invalid 00:11:30.494 ************************************ 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:30.753 ************************************ 00:11:30.753 START TEST nvmf_connect_stress 00:11:30.753 ************************************ 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:30.753 * Looking for test storage... 
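Before the connect_stress run gets going, the nvmf_invalid block above is worth pinning down: it checks the target's JSON-RPC parameter validation by asking nvmf_create_subsystem for a controller-ID window whose minimum (6) exceeds its maximum (5), and by asking nvmf_delete_target to remove a target that was never created; both requests come back with code -32602. A minimal sketch of the same negative checks, assuming a running nvmf_tgt reachable on the default /var/tmp/spdk.sock RPC socket and using the repository-relative script paths exactly as they appear in the trace:

    # Hedged sketch of the negative-parameter checks driven by target/invalid.sh above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path taken from the trace

    # A cntlid window with min (-i) greater than max (-I) must be rejected with -32602.
    # rpc.py exits non-zero on a JSON-RPC error, hence the || true to keep the sketch going.
    out=$("$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28259 -i 6 -I 5 2>&1) || true
    [[ $out == *"Invalid cntlid range"* ]] && echo "cntlid window 6..5 rejected as expected"

    # Deleting a target name that was never created must fail the same way.
    out=$("$SPDK/test/nvmf/target/multitarget_rpc.py" nvmf_delete_target --name foobar 2>&1) || true
    [[ $out == *"doesn't exist"* ]] && echo "nonexistent target delete rejected as expected"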
00:11:30.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:30.753 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:11:30.754 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:30.754 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:30.754 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:30.754 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:11:30.754 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:30.754 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:30.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.754 --rc genhtml_branch_coverage=1 00:11:30.754 --rc genhtml_function_coverage=1 00:11:30.754 --rc genhtml_legend=1 00:11:30.754 --rc geninfo_all_blocks=1 00:11:30.754 --rc geninfo_unexecuted_blocks=1 00:11:30.754 00:11:30.754 ' 00:11:30.754 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:30.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.754 --rc genhtml_branch_coverage=1 00:11:30.754 --rc genhtml_function_coverage=1 00:11:30.754 --rc genhtml_legend=1 00:11:30.754 --rc geninfo_all_blocks=1 00:11:30.754 --rc geninfo_unexecuted_blocks=1 00:11:30.754 00:11:30.754 ' 00:11:30.754 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:30.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.754 --rc genhtml_branch_coverage=1 00:11:30.754 --rc genhtml_function_coverage=1 00:11:30.754 --rc genhtml_legend=1 00:11:30.754 --rc geninfo_all_blocks=1 00:11:30.754 --rc geninfo_unexecuted_blocks=1 00:11:30.754 00:11:30.754 ' 00:11:30.754 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:30.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.754 --rc genhtml_branch_coverage=1 00:11:30.754 --rc genhtml_function_coverage=1 00:11:30.754 --rc genhtml_legend=1 00:11:30.754 --rc geninfo_all_blocks=1 00:11:30.754 --rc geninfo_unexecuted_blocks=1 00:11:30.754 00:11:30.754 ' 00:11:30.754 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:30.754 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:30.754 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.754 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.754 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.754 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.754 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.754 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.754 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.754 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.754 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.754 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.754 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:30.754 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:30.754 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.754 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.754 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:30.754 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:11:31.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:11:31.014 05:06:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:11:36.292 05:06:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:36.292 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:36.292 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:36.292 Found net devices under 0000:86:00.0: cvl_0_0 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:36.292 Found net devices under 0000:86:00.1: cvl_0_1 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:36.292 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:36.293 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:36.293 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:36.293 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:36.293 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:36.293 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:36.293 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:36.293 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:36.293 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:36.552 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:36.552 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:36.552 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:36.552 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:36.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:36.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:11:36.552 00:11:36.552 --- 10.0.0.2 ping statistics --- 00:11:36.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.552 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:11:36.552 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:36.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:36.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:11:36.552 00:11:36.552 --- 10.0.0.1 ping statistics --- 00:11:36.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.552 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:11:36.552 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:36.552 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:11:36.552 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:36.552 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:36.552 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:36.552 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:36.552 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:36.552 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:36.552 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:36.552 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:36.552 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:36.552 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:36.552 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.552 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3524037 00:11:36.552 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3524037 00:11:36.552 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:36.552 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3524037 ']' 00:11:36.552 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.552 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:36.552 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:36.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.552 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:36.552 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.552 [2024-12-09 05:06:13.087133] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:11:36.552 [2024-12-09 05:06:13.087181] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.552 [2024-12-09 05:06:13.157158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:36.812 [2024-12-09 05:06:13.199921] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:36.812 [2024-12-09 05:06:13.199957] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:36.812 [2024-12-09 05:06:13.199965] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:36.812 [2024-12-09 05:06:13.199971] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:36.812 [2024-12-09 05:06:13.199976] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:36.812 [2024-12-09 05:06:13.201354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.812 [2024-12-09 05:06:13.201441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:36.812 [2024-12-09 05:06:13.201443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.812 [2024-12-09 05:06:13.339227] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
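At this point in the trace nvmf_tgt (pid 3524037) is up inside the cvl_0_0_ns_spdk namespace, the TCP transport has been created, and the nqn.2016-06.io.spdk:cnode1 subsystem is being created; the listener on 10.0.0.2:4420 and the NULL1 null bdev follow just below. Collapsed into direct scripts/rpc.py calls (rpc_cmd in the trace is a thin wrapper around that script), the bring-up is roughly the following sketch; the flag spellings are taken verbatim from the trace, and the comments are a reading of them rather than output from rpc.py --help:

    # Hedged sketch: the connect_stress.sh target bring-up replayed as direct rpc.py calls.
    # Assumes a running nvmf_tgt on the default /var/tmp/spdk.sock and $SPDK at the checkout root.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"

    $RPC nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, options exactly as traced
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                              # allow any host, fixed serial number
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                                  # listen on the namespaced data IP
    $RPC bdev_null_create NULL1 1000 512                            # null backing bdev, sizes as traced

A null bdev presumably suffices here because the test stresses connect/disconnect behaviour rather than data integrity: it discards writes and returns zeroes on reads, so no real media is needed behind the subsystem.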
00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.812 [2024-12-09 05:06:13.359456] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.812 NULL1 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3524216 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:36.812 05:06:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:36.812 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:36.813 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:36.813 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:36.813 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:36.813 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:36.813 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:36.813 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:36.813 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:36.813 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:36.813 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:36.813 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:36.813 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:36.813 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:36.813 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:36.813 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:36.813 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:36.813 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:36.813 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:36.813 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:36.813 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:36.813 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:36.813 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:37.071 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:37.071 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:37.071 05:06:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:37.071 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.071 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.071 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.329 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.329 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:37.329 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.329 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.329 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.587 05:06:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.587 05:06:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:37.587 05:06:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.587 05:06:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.587 05:06:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.845 05:06:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.845 05:06:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:37.845 05:06:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.845 05:06:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.845 05:06:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.412 05:06:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.412 05:06:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:38.412 05:06:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.412 05:06:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.412 05:06:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.669 05:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.670 05:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:38.670 05:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.670 05:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.670 05:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.927 05:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.927 05:06:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:38.927 05:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.927 05:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.927 05:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.206 05:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.206 05:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:39.206 05:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.206 05:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.206 05:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.464 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.464 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:39.464 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.464 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.464 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.029 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.029 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:40.029 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.029 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.029 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.287 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.287 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:40.287 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.287 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.287 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.545 05:06:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.545 05:06:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:40.545 05:06:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.545 05:06:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.545 05:06:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.802 05:06:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.802 05:06:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:40.802 05:06:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.802 05:06:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.802 05:06:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.060 05:06:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.060 05:06:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:41.060 05:06:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:41.060 05:06:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.060 05:06:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.636 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.636 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:41.636 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:41.636 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.636 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.895 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.895 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:41.895 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:41.895 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.895 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.154 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.154 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:42.154 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:42.154 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.154 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.413 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.413 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:42.413 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:42.413 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.413 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.672 05:06:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.673 05:06:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:42.673 05:06:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:42.673 05:06:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.673 05:06:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.241 05:06:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.241 05:06:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:43.241 05:06:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:43.241 05:06:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.241 05:06:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.500 05:06:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.500 05:06:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:43.500 05:06:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:43.500 05:06:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.500 05:06:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.759 05:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.759 05:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:43.759 05:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:43.759 05:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.759 05:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.018 05:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.018 05:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:44.018 05:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:44.018 05:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.018 05:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.587 05:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.587 05:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:44.587 05:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:44.587 05:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.587 05:06:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.846 05:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.846 05:06:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:44.846 05:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:44.847 05:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.847 05:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.105 05:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.106 05:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:45.106 05:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.106 05:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.106 05:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.365 05:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.365 05:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:45.365 05:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.365 05:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.365 05:06:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.624 05:06:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.624 05:06:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:45.624 05:06:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.624 05:06:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.624 05:06:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.191 05:06:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.191 05:06:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:46.191 05:06:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.191 05:06:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.191 05:06:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.469 05:06:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.469 05:06:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:46.469 05:06:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.469 05:06:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.469 05:06:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.801 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.801 05:06:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:46.801 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.801 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.801 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.079 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:47.079 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.079 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3524216 00:11:47.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3524216) - No such process 00:11:47.079 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3524216 00:11:47.079 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:47.079 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:47.079 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:47.079 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:47.079 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:11:47.079 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:47.079 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:11:47.079 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:47.079 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:47.079 rmmod nvme_tcp 00:11:47.079 rmmod nvme_fabrics 00:11:47.079 rmmod nvme_keyring 00:11:47.079 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:47.079 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:11:47.079 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:11:47.079 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3524037 ']' 00:11:47.079 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3524037 00:11:47.079 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3524037 ']' 00:11:47.079 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3524037 00:11:47.079 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:11:47.079 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.079 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3524037 00:11:47.079 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
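The long run of "kill -0 3524216 / rpc_cmd" pairs above is the watchdog at connect_stress.sh lines 34-35: the script keeps issuing RPCs to the target for as long as the stress tool's PID is still alive, and the loop only falls through once kill -0 reports "No such process". A minimal sketch of that pattern (variable and file names here are illustrative, not copied from the script):

    while kill -0 "$stress_pid"; do   # stress tool (pid 3524216 in this run) still alive?
        rpc_cmd                       # keep the target busy with RPC traffic meanwhile
    done
    wait "$stress_pid"                # line 38: reap the exit status once it is gone
    rm -f "$testdir/rpc.txt"          # line 39: drop the batched RPC input file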
00:11:47.079 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:47.079 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3524037' 00:11:47.079 killing process with pid 3524037 00:11:47.079 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3524037 00:11:47.079 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3524037 00:11:47.339 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:47.339 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:47.339 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:47.339 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:11:47.339 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:11:47.339 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:11:47.339 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:47.339 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:47.339 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:47.339 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.339 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.339 05:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.874 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:49.874 00:11:49.874 real 0m18.735s 00:11:49.874 user 0m39.405s 00:11:49.874 sys 0m8.307s 00:11:49.874 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.874 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:49.874 ************************************ 00:11:49.874 END TEST nvmf_connect_stress 00:11:49.874 ************************************ 00:11:49.874 05:06:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:49.875 05:06:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:49.875 05:06:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.875 05:06:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:49.875 ************************************ 00:11:49.875 START TEST nvmf_fused_ordering 00:11:49.875 ************************************ 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:49.875 * Looking for test storage... 
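With the stress run finished, nvmftestfini tears the target back down before the next test starts. Condensed, the cleanup traced above amounts to the following; the namespace removal is hidden inside _remove_spdk_ns, whose body is not shown in this log, and the iptables pipeline is a likely reconstruction of the iptr helper:

    sync
    modprobe -v -r nvme-tcp           # also unloads nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                   # nvmf_tgt pid 3524037 in this run
    wait "$nvmfpid"
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip the rules the test added
    _remove_spdk_ns                   # drops the cvl_0_0_ns_spdk namespace
    ip -4 addr flush cvl_0_1

The fused_ordering.sh run started by run_test above then repeats the usual target bring-up, traced below.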
00:11:49.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:49.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.875 --rc genhtml_branch_coverage=1 00:11:49.875 --rc genhtml_function_coverage=1 00:11:49.875 --rc genhtml_legend=1 00:11:49.875 --rc geninfo_all_blocks=1 00:11:49.875 --rc geninfo_unexecuted_blocks=1 00:11:49.875 00:11:49.875 ' 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:49.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.875 --rc genhtml_branch_coverage=1 00:11:49.875 --rc genhtml_function_coverage=1 00:11:49.875 --rc genhtml_legend=1 00:11:49.875 --rc geninfo_all_blocks=1 00:11:49.875 --rc geninfo_unexecuted_blocks=1 00:11:49.875 00:11:49.875 ' 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:49.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.875 --rc genhtml_branch_coverage=1 00:11:49.875 --rc genhtml_function_coverage=1 00:11:49.875 --rc genhtml_legend=1 00:11:49.875 --rc geninfo_all_blocks=1 00:11:49.875 --rc geninfo_unexecuted_blocks=1 00:11:49.875 00:11:49.875 ' 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:49.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.875 --rc genhtml_branch_coverage=1 00:11:49.875 --rc genhtml_function_coverage=1 00:11:49.875 --rc genhtml_legend=1 00:11:49.875 --rc geninfo_all_blocks=1 00:11:49.875 --rc geninfo_unexecuted_blocks=1 00:11:49.875 00:11:49.875 ' 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.875 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.876 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.876 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:49.876 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.876 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:11:49.876 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:49.876 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:49.876 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:49.876 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.876 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.876 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:11:49.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:49.876 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:49.876 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:49.876 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:49.876 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:49.876 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:49.876 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.876 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:49.876 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:49.876 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:49.876 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.876 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.876 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.876 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:49.876 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:49.876 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:11:49.876 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:11:55.151 05:06:31 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:55.151 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:55.151 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:55.151 Found net devices under 0000:86:00.0: cvl_0_0 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:55.151 Found net devices under 0000:86:00.1: cvl_0_1 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:55.151 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:55.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:55.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:11:55.152 00:11:55.152 --- 10.0.0.2 ping statistics --- 00:11:55.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.152 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:55.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:55.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:11:55.152 00:11:55.152 --- 10.0.0.1 ping statistics --- 00:11:55.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.152 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3529435 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3529435 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3529435 ']' 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
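Before the new target application comes up, it is worth summarizing the network plumbing that nvmf_tcp_init performed in the trace above: the first e810 port (cvl_0_0) is moved into a private namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the host namespace as the initiator at 10.0.0.1, with an iptables rule opening the NVMe/TCP port and a ping in each direction as a sanity check. Condensed:

    ip netns add cvl_0_0_ns_spdk                    # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # host -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application is then launched inside that namespace just below (nvmfappstart -m 0x2); in shorthand, with the path shortened and assuming the helper backgrounds the process and records its PID, that is roughly:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # mask 0x2 -> one reactor on core 1
    nvmfpid=$!                                      # 3529435 in this run
    waitforlisten "$nvmfpid"                        # poll until /var/tmp/spdk.sock answers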
00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:55.152 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:55.152 [2024-12-09 05:06:31.707624] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:11:55.152 [2024-12-09 05:06:31.707671] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.152 [2024-12-09 05:06:31.776679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.411 [2024-12-09 05:06:31.818594] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:55.411 [2024-12-09 05:06:31.818627] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:55.411 [2024-12-09 05:06:31.818635] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:55.411 [2024-12-09 05:06:31.818644] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:55.411 [2024-12-09 05:06:31.818650] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:55.411 [2024-12-09 05:06:31.819237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:55.411 [2024-12-09 05:06:31.946610] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 
00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:55.411 [2024-12-09 05:06:31.962780] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:55.411 NULL1 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.411 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:55.411 [2024-12-09 05:06:32.017421] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
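The rpc_cmd calls traced above stand up the target for the fused-ordering run: create the TCP transport, create subsystem cnode1, add a TCP listener on 10.0.0.2:4420, back it with a null bdev, and attach that bdev as namespace 1. The helper ultimately drives scripts/rpc.py, so the sequence is roughly equivalent to the following (socket path omitted, since the helper supplies it):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # -u 8192: in-capsule data size
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512              # 1000 MB, 512-byte blocks -> the 1GB namespace reported below
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The initiator-side fused_ordering binary is then pointed at that subsystem with the transport ID string -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1', and its per-iteration counter output follows.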
00:11:55.411 [2024-12-09 05:06:32.017466] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3529460 ] 00:11:55.980 Attached to nqn.2016-06.io.spdk:cnode1 00:11:55.980 Namespace ID: 1 size: 1GB 00:11:55.980 fused_ordering(0) 00:11:55.980 fused_ordering(1) 00:11:55.980 fused_ordering(2) 00:11:55.980 fused_ordering(3) 00:11:55.980 fused_ordering(4) 00:11:55.980 fused_ordering(5) 00:11:55.980 fused_ordering(6) 00:11:55.980 fused_ordering(7) 00:11:55.980 fused_ordering(8) 00:11:55.980 fused_ordering(9) 00:11:55.980 fused_ordering(10) 00:11:55.980 fused_ordering(11) 00:11:55.980 fused_ordering(12) 00:11:55.980 fused_ordering(13) 00:11:55.980 fused_ordering(14) 00:11:55.980 fused_ordering(15) 00:11:55.980 fused_ordering(16) 00:11:55.980 fused_ordering(17) 00:11:55.980 fused_ordering(18) 00:11:55.980 fused_ordering(19) 00:11:55.980 fused_ordering(20) 00:11:55.980 fused_ordering(21) 00:11:55.980 fused_ordering(22) 00:11:55.980 fused_ordering(23) 00:11:55.980 fused_ordering(24) 00:11:55.980 fused_ordering(25) 00:11:55.980 fused_ordering(26) 00:11:55.980 fused_ordering(27) 00:11:55.980 fused_ordering(28) 00:11:55.980 fused_ordering(29) 00:11:55.980 fused_ordering(30) 00:11:55.980 fused_ordering(31) 00:11:55.980 fused_ordering(32) 00:11:55.980 fused_ordering(33) 00:11:55.980 fused_ordering(34) 00:11:55.980 fused_ordering(35) 00:11:55.980 fused_ordering(36) 00:11:55.980 fused_ordering(37) 00:11:55.980 fused_ordering(38) 00:11:55.980 fused_ordering(39) 00:11:55.980 fused_ordering(40) 00:11:55.980 fused_ordering(41) 00:11:55.980 fused_ordering(42) 00:11:55.980 fused_ordering(43) 00:11:55.980 fused_ordering(44) 00:11:55.980 fused_ordering(45) 00:11:55.980 fused_ordering(46) 00:11:55.980 fused_ordering(47) 00:11:55.980 fused_ordering(48) 00:11:55.980 fused_ordering(49) 00:11:55.980 fused_ordering(50) 00:11:55.980 fused_ordering(51) 00:11:55.980 fused_ordering(52) 00:11:55.980 fused_ordering(53) 00:11:55.980 fused_ordering(54) 00:11:55.980 fused_ordering(55) 00:11:55.980 fused_ordering(56) 00:11:55.980 fused_ordering(57) 00:11:55.980 fused_ordering(58) 00:11:55.980 fused_ordering(59) 00:11:55.980 fused_ordering(60) 00:11:55.980 fused_ordering(61) 00:11:55.980 fused_ordering(62) 00:11:55.980 fused_ordering(63) 00:11:55.980 fused_ordering(64) 00:11:55.980 fused_ordering(65) 00:11:55.980 fused_ordering(66) 00:11:55.980 fused_ordering(67) 00:11:55.980 fused_ordering(68) 00:11:55.980 fused_ordering(69) 00:11:55.980 fused_ordering(70) 00:11:55.980 fused_ordering(71) 00:11:55.980 fused_ordering(72) 00:11:55.980 fused_ordering(73) 00:11:55.980 fused_ordering(74) 00:11:55.980 fused_ordering(75) 00:11:55.980 fused_ordering(76) 00:11:55.980 fused_ordering(77) 00:11:55.980 fused_ordering(78) 00:11:55.980 fused_ordering(79) 00:11:55.980 fused_ordering(80) 00:11:55.980 fused_ordering(81) 00:11:55.980 fused_ordering(82) 00:11:55.980 fused_ordering(83) 00:11:55.980 fused_ordering(84) 00:11:55.980 fused_ordering(85) 00:11:55.980 fused_ordering(86) 00:11:55.980 fused_ordering(87) 00:11:55.980 fused_ordering(88) 00:11:55.980 fused_ordering(89) 00:11:55.980 fused_ordering(90) 00:11:55.980 fused_ordering(91) 00:11:55.980 fused_ordering(92) 00:11:55.980 fused_ordering(93) 00:11:55.980 fused_ordering(94) 00:11:55.980 fused_ordering(95) 00:11:55.980 fused_ordering(96) 00:11:55.980 fused_ordering(97) 00:11:55.980 fused_ordering(98) 
00:11:55.980 fused_ordering(99) 00:11:55.980 fused_ordering(100) [... fused_ordering(101) through fused_ordering(527) follow consecutively, one counter line each ...] fused_ordering(528)
00:11:56.550 fused_ordering(529) 00:11:56.550 fused_ordering(530) 00:11:56.550 fused_ordering(531) 00:11:56.550 fused_ordering(532) 00:11:56.550 fused_ordering(533) 00:11:56.550 fused_ordering(534) 00:11:56.550 fused_ordering(535) 00:11:56.550 fused_ordering(536) 00:11:56.550 fused_ordering(537) 00:11:56.550 fused_ordering(538) 00:11:56.550 fused_ordering(539) 00:11:56.550 fused_ordering(540) 00:11:56.550 fused_ordering(541) 00:11:56.550 fused_ordering(542) 00:11:56.550 fused_ordering(543) 00:11:56.550 fused_ordering(544) 00:11:56.550 fused_ordering(545) 00:11:56.550 fused_ordering(546) 00:11:56.550 fused_ordering(547) 00:11:56.550 fused_ordering(548) 00:11:56.550 fused_ordering(549) 00:11:56.550 fused_ordering(550) 00:11:56.550 fused_ordering(551) 00:11:56.550 fused_ordering(552) 00:11:56.550 fused_ordering(553) 00:11:56.550 fused_ordering(554) 00:11:56.550 fused_ordering(555) 00:11:56.550 fused_ordering(556) 00:11:56.550 fused_ordering(557) 00:11:56.550 fused_ordering(558) 00:11:56.550 fused_ordering(559) 00:11:56.550 fused_ordering(560) 00:11:56.550 fused_ordering(561) 00:11:56.550 fused_ordering(562) 00:11:56.550 fused_ordering(563) 00:11:56.550 fused_ordering(564) 00:11:56.550 fused_ordering(565) 00:11:56.550 fused_ordering(566) 00:11:56.550 fused_ordering(567) 00:11:56.550 fused_ordering(568) 00:11:56.550 fused_ordering(569) 00:11:56.550 fused_ordering(570) 00:11:56.550 fused_ordering(571) 00:11:56.550 fused_ordering(572) 00:11:56.550 fused_ordering(573) 00:11:56.550 fused_ordering(574) 00:11:56.550 fused_ordering(575) 00:11:56.550 fused_ordering(576) 00:11:56.550 fused_ordering(577) 00:11:56.550 fused_ordering(578) 00:11:56.550 fused_ordering(579) 00:11:56.550 fused_ordering(580) 00:11:56.550 fused_ordering(581) 00:11:56.550 fused_ordering(582) 00:11:56.550 fused_ordering(583) 00:11:56.550 fused_ordering(584) 00:11:56.550 fused_ordering(585) 00:11:56.550 fused_ordering(586) 00:11:56.550 fused_ordering(587) 00:11:56.550 fused_ordering(588) 00:11:56.550 fused_ordering(589) 00:11:56.550 fused_ordering(590) 00:11:56.550 fused_ordering(591) 00:11:56.550 fused_ordering(592) 00:11:56.550 fused_ordering(593) 00:11:56.550 fused_ordering(594) 00:11:56.550 fused_ordering(595) 00:11:56.550 fused_ordering(596) 00:11:56.550 fused_ordering(597) 00:11:56.550 fused_ordering(598) 00:11:56.550 fused_ordering(599) 00:11:56.550 fused_ordering(600) 00:11:56.550 fused_ordering(601) 00:11:56.550 fused_ordering(602) 00:11:56.550 fused_ordering(603) 00:11:56.550 fused_ordering(604) 00:11:56.550 fused_ordering(605) 00:11:56.550 fused_ordering(606) 00:11:56.550 fused_ordering(607) 00:11:56.550 fused_ordering(608) 00:11:56.550 fused_ordering(609) 00:11:56.550 fused_ordering(610) 00:11:56.550 fused_ordering(611) 00:11:56.550 fused_ordering(612) 00:11:56.550 fused_ordering(613) 00:11:56.550 fused_ordering(614) 00:11:56.550 fused_ordering(615) 00:11:56.809 fused_ordering(616) 00:11:56.809 fused_ordering(617) 00:11:56.809 fused_ordering(618) 00:11:56.809 fused_ordering(619) 00:11:56.809 fused_ordering(620) 00:11:56.809 fused_ordering(621) 00:11:56.809 fused_ordering(622) 00:11:56.809 fused_ordering(623) 00:11:56.809 fused_ordering(624) 00:11:56.809 fused_ordering(625) 00:11:56.809 fused_ordering(626) 00:11:56.809 fused_ordering(627) 00:11:56.809 fused_ordering(628) 00:11:56.809 fused_ordering(629) 00:11:56.809 fused_ordering(630) 00:11:56.809 fused_ordering(631) 00:11:56.809 fused_ordering(632) 00:11:56.809 fused_ordering(633) 00:11:56.809 fused_ordering(634) 00:11:56.809 fused_ordering(635) 00:11:56.809 
fused_ordering(636) 00:11:56.809 fused_ordering(637) 00:11:56.809 fused_ordering(638) 00:11:56.809 fused_ordering(639) 00:11:56.809 fused_ordering(640) 00:11:56.809 fused_ordering(641) 00:11:56.809 fused_ordering(642) 00:11:56.809 fused_ordering(643) 00:11:56.809 fused_ordering(644) 00:11:56.809 fused_ordering(645) 00:11:56.809 fused_ordering(646) 00:11:56.809 fused_ordering(647) 00:11:56.809 fused_ordering(648) 00:11:56.809 fused_ordering(649) 00:11:56.809 fused_ordering(650) 00:11:56.809 fused_ordering(651) 00:11:56.809 fused_ordering(652) 00:11:56.809 fused_ordering(653) 00:11:56.809 fused_ordering(654) 00:11:56.809 fused_ordering(655) 00:11:56.809 fused_ordering(656) 00:11:56.809 fused_ordering(657) 00:11:56.809 fused_ordering(658) 00:11:56.809 fused_ordering(659) 00:11:56.809 fused_ordering(660) 00:11:56.809 fused_ordering(661) 00:11:56.809 fused_ordering(662) 00:11:56.809 fused_ordering(663) 00:11:56.809 fused_ordering(664) 00:11:56.809 fused_ordering(665) 00:11:56.809 fused_ordering(666) 00:11:56.809 fused_ordering(667) 00:11:56.809 fused_ordering(668) 00:11:56.809 fused_ordering(669) 00:11:56.809 fused_ordering(670) 00:11:56.809 fused_ordering(671) 00:11:56.809 fused_ordering(672) 00:11:56.809 fused_ordering(673) 00:11:56.809 fused_ordering(674) 00:11:56.809 fused_ordering(675) 00:11:56.809 fused_ordering(676) 00:11:56.809 fused_ordering(677) 00:11:56.809 fused_ordering(678) 00:11:56.809 fused_ordering(679) 00:11:56.809 fused_ordering(680) 00:11:56.809 fused_ordering(681) 00:11:56.809 fused_ordering(682) 00:11:56.809 fused_ordering(683) 00:11:56.809 fused_ordering(684) 00:11:56.809 fused_ordering(685) 00:11:56.809 fused_ordering(686) 00:11:56.809 fused_ordering(687) 00:11:56.809 fused_ordering(688) 00:11:56.809 fused_ordering(689) 00:11:56.809 fused_ordering(690) 00:11:56.809 fused_ordering(691) 00:11:56.809 fused_ordering(692) 00:11:56.809 fused_ordering(693) 00:11:56.809 fused_ordering(694) 00:11:56.809 fused_ordering(695) 00:11:56.809 fused_ordering(696) 00:11:56.809 fused_ordering(697) 00:11:56.809 fused_ordering(698) 00:11:56.809 fused_ordering(699) 00:11:56.809 fused_ordering(700) 00:11:56.809 fused_ordering(701) 00:11:56.809 fused_ordering(702) 00:11:56.809 fused_ordering(703) 00:11:56.809 fused_ordering(704) 00:11:56.809 fused_ordering(705) 00:11:56.809 fused_ordering(706) 00:11:56.809 fused_ordering(707) 00:11:56.809 fused_ordering(708) 00:11:56.809 fused_ordering(709) 00:11:56.809 fused_ordering(710) 00:11:56.809 fused_ordering(711) 00:11:56.809 fused_ordering(712) 00:11:56.809 fused_ordering(713) 00:11:56.809 fused_ordering(714) 00:11:56.809 fused_ordering(715) 00:11:56.809 fused_ordering(716) 00:11:56.809 fused_ordering(717) 00:11:56.809 fused_ordering(718) 00:11:56.809 fused_ordering(719) 00:11:56.809 fused_ordering(720) 00:11:56.809 fused_ordering(721) 00:11:56.809 fused_ordering(722) 00:11:56.809 fused_ordering(723) 00:11:56.809 fused_ordering(724) 00:11:56.809 fused_ordering(725) 00:11:56.809 fused_ordering(726) 00:11:56.810 fused_ordering(727) 00:11:56.810 fused_ordering(728) 00:11:56.810 fused_ordering(729) 00:11:56.810 fused_ordering(730) 00:11:56.810 fused_ordering(731) 00:11:56.810 fused_ordering(732) 00:11:56.810 fused_ordering(733) 00:11:56.810 fused_ordering(734) 00:11:56.810 fused_ordering(735) 00:11:56.810 fused_ordering(736) 00:11:56.810 fused_ordering(737) 00:11:56.810 fused_ordering(738) 00:11:56.810 fused_ordering(739) 00:11:56.810 fused_ordering(740) 00:11:56.810 fused_ordering(741) 00:11:56.810 fused_ordering(742) 00:11:56.810 fused_ordering(743) 
00:11:56.810 fused_ordering(744) 00:11:56.810 fused_ordering(745) 00:11:56.810 fused_ordering(746) 00:11:56.810 fused_ordering(747) 00:11:56.810 fused_ordering(748) 00:11:56.810 fused_ordering(749) 00:11:56.810 fused_ordering(750) 00:11:56.810 fused_ordering(751) 00:11:56.810 fused_ordering(752) 00:11:56.810 fused_ordering(753) 00:11:56.810 fused_ordering(754) 00:11:56.810 fused_ordering(755) 00:11:56.810 fused_ordering(756) 00:11:56.810 fused_ordering(757) 00:11:56.810 fused_ordering(758) 00:11:56.810 fused_ordering(759) 00:11:56.810 fused_ordering(760) 00:11:56.810 fused_ordering(761) 00:11:56.810 fused_ordering(762) 00:11:56.810 fused_ordering(763) 00:11:56.810 fused_ordering(764) 00:11:56.810 fused_ordering(765) 00:11:56.810 fused_ordering(766) 00:11:56.810 fused_ordering(767) 00:11:56.810 fused_ordering(768) 00:11:56.810 fused_ordering(769) 00:11:56.810 fused_ordering(770) 00:11:56.810 fused_ordering(771) 00:11:56.810 fused_ordering(772) 00:11:56.810 fused_ordering(773) 00:11:56.810 fused_ordering(774) 00:11:56.810 fused_ordering(775) 00:11:56.810 fused_ordering(776) 00:11:56.810 fused_ordering(777) 00:11:56.810 fused_ordering(778) 00:11:56.810 fused_ordering(779) 00:11:56.810 fused_ordering(780) 00:11:56.810 fused_ordering(781) 00:11:56.810 fused_ordering(782) 00:11:56.810 fused_ordering(783) 00:11:56.810 fused_ordering(784) 00:11:56.810 fused_ordering(785) 00:11:56.810 fused_ordering(786) 00:11:56.810 fused_ordering(787) 00:11:56.810 fused_ordering(788) 00:11:56.810 fused_ordering(789) 00:11:56.810 fused_ordering(790) 00:11:56.810 fused_ordering(791) 00:11:56.810 fused_ordering(792) 00:11:56.810 fused_ordering(793) 00:11:56.810 fused_ordering(794) 00:11:56.810 fused_ordering(795) 00:11:56.810 fused_ordering(796) 00:11:56.810 fused_ordering(797) 00:11:56.810 fused_ordering(798) 00:11:56.810 fused_ordering(799) 00:11:56.810 fused_ordering(800) 00:11:56.810 fused_ordering(801) 00:11:56.810 fused_ordering(802) 00:11:56.810 fused_ordering(803) 00:11:56.810 fused_ordering(804) 00:11:56.810 fused_ordering(805) 00:11:56.810 fused_ordering(806) 00:11:56.810 fused_ordering(807) 00:11:56.810 fused_ordering(808) 00:11:56.810 fused_ordering(809) 00:11:56.810 fused_ordering(810) 00:11:56.810 fused_ordering(811) 00:11:56.810 fused_ordering(812) 00:11:56.810 fused_ordering(813) 00:11:56.810 fused_ordering(814) 00:11:56.810 fused_ordering(815) 00:11:56.810 fused_ordering(816) 00:11:56.810 fused_ordering(817) 00:11:56.810 fused_ordering(818) 00:11:56.810 fused_ordering(819) 00:11:56.810 fused_ordering(820) 00:11:57.378 fused_ordering(821) 00:11:57.378 fused_ordering(822) 00:11:57.378 fused_ordering(823) 00:11:57.378 fused_ordering(824) 00:11:57.378 fused_ordering(825) 00:11:57.378 fused_ordering(826) 00:11:57.378 fused_ordering(827) 00:11:57.378 fused_ordering(828) 00:11:57.378 fused_ordering(829) 00:11:57.378 fused_ordering(830) 00:11:57.378 fused_ordering(831) 00:11:57.378 fused_ordering(832) 00:11:57.378 fused_ordering(833) 00:11:57.378 fused_ordering(834) 00:11:57.378 fused_ordering(835) 00:11:57.378 fused_ordering(836) 00:11:57.378 fused_ordering(837) 00:11:57.378 fused_ordering(838) 00:11:57.378 fused_ordering(839) 00:11:57.378 fused_ordering(840) 00:11:57.378 fused_ordering(841) 00:11:57.378 fused_ordering(842) 00:11:57.378 fused_ordering(843) 00:11:57.378 fused_ordering(844) 00:11:57.378 fused_ordering(845) 00:11:57.378 fused_ordering(846) 00:11:57.378 fused_ordering(847) 00:11:57.378 fused_ordering(848) 00:11:57.378 fused_ordering(849) 00:11:57.378 fused_ordering(850) 00:11:57.378 
fused_ordering(851) 00:11:57.378 fused_ordering(852) 00:11:57.378 fused_ordering(853) 00:11:57.378 fused_ordering(854) 00:11:57.378 fused_ordering(855) 00:11:57.378 fused_ordering(856) 00:11:57.378 fused_ordering(857) 00:11:57.378 fused_ordering(858) 00:11:57.378 fused_ordering(859) 00:11:57.378 fused_ordering(860) 00:11:57.378 fused_ordering(861) 00:11:57.378 fused_ordering(862) 00:11:57.378 fused_ordering(863) 00:11:57.378 fused_ordering(864) 00:11:57.378 fused_ordering(865) 00:11:57.378 fused_ordering(866) 00:11:57.378 fused_ordering(867) 00:11:57.378 fused_ordering(868) 00:11:57.378 fused_ordering(869) 00:11:57.378 fused_ordering(870) 00:11:57.378 fused_ordering(871) 00:11:57.378 fused_ordering(872) 00:11:57.378 fused_ordering(873) 00:11:57.378 fused_ordering(874) 00:11:57.378 fused_ordering(875) 00:11:57.378 fused_ordering(876) 00:11:57.378 fused_ordering(877) 00:11:57.378 fused_ordering(878) 00:11:57.378 fused_ordering(879) 00:11:57.378 fused_ordering(880) 00:11:57.378 fused_ordering(881) 00:11:57.378 fused_ordering(882) 00:11:57.378 fused_ordering(883) 00:11:57.378 fused_ordering(884) 00:11:57.378 fused_ordering(885) 00:11:57.378 fused_ordering(886) 00:11:57.378 fused_ordering(887) 00:11:57.378 fused_ordering(888) 00:11:57.378 fused_ordering(889) 00:11:57.378 fused_ordering(890) 00:11:57.378 fused_ordering(891) 00:11:57.378 fused_ordering(892) 00:11:57.378 fused_ordering(893) 00:11:57.378 fused_ordering(894) 00:11:57.378 fused_ordering(895) 00:11:57.378 fused_ordering(896) 00:11:57.378 fused_ordering(897) 00:11:57.378 fused_ordering(898) 00:11:57.378 fused_ordering(899) 00:11:57.378 fused_ordering(900) 00:11:57.378 fused_ordering(901) 00:11:57.378 fused_ordering(902) 00:11:57.378 fused_ordering(903) 00:11:57.378 fused_ordering(904) 00:11:57.378 fused_ordering(905) 00:11:57.378 fused_ordering(906) 00:11:57.378 fused_ordering(907) 00:11:57.378 fused_ordering(908) 00:11:57.378 fused_ordering(909) 00:11:57.378 fused_ordering(910) 00:11:57.378 fused_ordering(911) 00:11:57.378 fused_ordering(912) 00:11:57.378 fused_ordering(913) 00:11:57.378 fused_ordering(914) 00:11:57.378 fused_ordering(915) 00:11:57.378 fused_ordering(916) 00:11:57.378 fused_ordering(917) 00:11:57.378 fused_ordering(918) 00:11:57.378 fused_ordering(919) 00:11:57.378 fused_ordering(920) 00:11:57.378 fused_ordering(921) 00:11:57.378 fused_ordering(922) 00:11:57.378 fused_ordering(923) 00:11:57.378 fused_ordering(924) 00:11:57.378 fused_ordering(925) 00:11:57.378 fused_ordering(926) 00:11:57.378 fused_ordering(927) 00:11:57.378 fused_ordering(928) 00:11:57.378 fused_ordering(929) 00:11:57.378 fused_ordering(930) 00:11:57.378 fused_ordering(931) 00:11:57.378 fused_ordering(932) 00:11:57.378 fused_ordering(933) 00:11:57.378 fused_ordering(934) 00:11:57.378 fused_ordering(935) 00:11:57.378 fused_ordering(936) 00:11:57.378 fused_ordering(937) 00:11:57.378 fused_ordering(938) 00:11:57.378 fused_ordering(939) 00:11:57.378 fused_ordering(940) 00:11:57.378 fused_ordering(941) 00:11:57.378 fused_ordering(942) 00:11:57.378 fused_ordering(943) 00:11:57.378 fused_ordering(944) 00:11:57.378 fused_ordering(945) 00:11:57.378 fused_ordering(946) 00:11:57.378 fused_ordering(947) 00:11:57.378 fused_ordering(948) 00:11:57.378 fused_ordering(949) 00:11:57.378 fused_ordering(950) 00:11:57.378 fused_ordering(951) 00:11:57.378 fused_ordering(952) 00:11:57.378 fused_ordering(953) 00:11:57.378 fused_ordering(954) 00:11:57.378 fused_ordering(955) 00:11:57.378 fused_ordering(956) 00:11:57.378 fused_ordering(957) 00:11:57.378 fused_ordering(958) 
00:11:57.378 fused_ordering(959) 00:11:57.378 fused_ordering(960) 00:11:57.378 fused_ordering(961) 00:11:57.378 fused_ordering(962) 00:11:57.378 fused_ordering(963) 00:11:57.378 fused_ordering(964) 00:11:57.378 fused_ordering(965) 00:11:57.378 fused_ordering(966) 00:11:57.378 fused_ordering(967) 00:11:57.378 fused_ordering(968) 00:11:57.378 fused_ordering(969) 00:11:57.378 fused_ordering(970) 00:11:57.378 fused_ordering(971) 00:11:57.378 fused_ordering(972) 00:11:57.378 fused_ordering(973) 00:11:57.378 fused_ordering(974) 00:11:57.378 fused_ordering(975) 00:11:57.378 fused_ordering(976) 00:11:57.378 fused_ordering(977) 00:11:57.378 fused_ordering(978) 00:11:57.378 fused_ordering(979) 00:11:57.378 fused_ordering(980) 00:11:57.379 fused_ordering(981) 00:11:57.379 fused_ordering(982) 00:11:57.379 fused_ordering(983) 00:11:57.379 fused_ordering(984) 00:11:57.379 fused_ordering(985) 00:11:57.379 fused_ordering(986) 00:11:57.379 fused_ordering(987) 00:11:57.379 fused_ordering(988) 00:11:57.379 fused_ordering(989) 00:11:57.379 fused_ordering(990) 00:11:57.379 fused_ordering(991) 00:11:57.379 fused_ordering(992) 00:11:57.379 fused_ordering(993) 00:11:57.379 fused_ordering(994) 00:11:57.379 fused_ordering(995) 00:11:57.379 fused_ordering(996) 00:11:57.379 fused_ordering(997) 00:11:57.379 fused_ordering(998) 00:11:57.379 fused_ordering(999) 00:11:57.379 fused_ordering(1000) 00:11:57.379 fused_ordering(1001) 00:11:57.379 fused_ordering(1002) 00:11:57.379 fused_ordering(1003) 00:11:57.379 fused_ordering(1004) 00:11:57.379 fused_ordering(1005) 00:11:57.379 fused_ordering(1006) 00:11:57.379 fused_ordering(1007) 00:11:57.379 fused_ordering(1008) 00:11:57.379 fused_ordering(1009) 00:11:57.379 fused_ordering(1010) 00:11:57.379 fused_ordering(1011) 00:11:57.379 fused_ordering(1012) 00:11:57.379 fused_ordering(1013) 00:11:57.379 fused_ordering(1014) 00:11:57.379 fused_ordering(1015) 00:11:57.379 fused_ordering(1016) 00:11:57.379 fused_ordering(1017) 00:11:57.379 fused_ordering(1018) 00:11:57.379 fused_ordering(1019) 00:11:57.379 fused_ordering(1020) 00:11:57.379 fused_ordering(1021) 00:11:57.379 fused_ordering(1022) 00:11:57.379 fused_ordering(1023) 00:11:57.379 05:06:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:57.379 05:06:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:57.379 05:06:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:57.379 05:06:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:11:57.379 05:06:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:57.379 05:06:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:11:57.379 05:06:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:57.379 05:06:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:57.379 rmmod nvme_tcp 00:11:57.379 rmmod nvme_fabrics 00:11:57.379 rmmod nvme_keyring 00:11:57.379 05:06:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:57.379 05:06:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:11:57.379 05:06:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:11:57.379 05:06:33 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3529435 ']' 00:11:57.379 05:06:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3529435 00:11:57.379 05:06:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3529435 ']' 00:11:57.379 05:06:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3529435 00:11:57.379 05:06:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:11:57.379 05:06:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:57.379 05:06:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3529435 00:11:57.379 05:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:57.379 05:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:57.379 05:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3529435' 00:11:57.379 killing process with pid 3529435 00:11:57.379 05:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3529435 00:11:57.379 05:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3529435 00:11:57.637 05:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:57.637 05:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:57.637 05:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:57.637 05:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:11:57.637 05:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:11:57.637 05:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:57.637 05:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:11:57.637 05:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:57.637 05:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:57.637 05:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.637 05:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:57.637 05:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:00.170 00:12:00.170 real 0m10.251s 00:12:00.170 user 0m4.826s 00:12:00.170 sys 0m5.515s 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:00.170 ************************************ 00:12:00.170 END TEST nvmf_fused_ordering 00:12:00.170 
************************************ 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:00.170 ************************************ 00:12:00.170 START TEST nvmf_ns_masking 00:12:00.170 ************************************ 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:00.170 * Looking for test storage... 00:12:00.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:00.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.170 --rc genhtml_branch_coverage=1 00:12:00.170 --rc genhtml_function_coverage=1 00:12:00.170 --rc genhtml_legend=1 00:12:00.170 --rc geninfo_all_blocks=1 00:12:00.170 --rc geninfo_unexecuted_blocks=1 00:12:00.170 00:12:00.170 ' 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:00.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.170 --rc genhtml_branch_coverage=1 00:12:00.170 --rc genhtml_function_coverage=1 00:12:00.170 --rc genhtml_legend=1 00:12:00.170 --rc geninfo_all_blocks=1 00:12:00.170 --rc geninfo_unexecuted_blocks=1 00:12:00.170 00:12:00.170 ' 00:12:00.170 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:00.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.170 --rc genhtml_branch_coverage=1 00:12:00.170 --rc genhtml_function_coverage=1 00:12:00.170 --rc genhtml_legend=1 00:12:00.170 --rc geninfo_all_blocks=1 00:12:00.171 --rc geninfo_unexecuted_blocks=1 00:12:00.171 00:12:00.171 ' 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:00.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.171 --rc genhtml_branch_coverage=1 00:12:00.171 --rc genhtml_function_coverage=1 00:12:00.171 --rc genhtml_legend=1 00:12:00.171 --rc geninfo_all_blocks=1 00:12:00.171 --rc geninfo_unexecuted_blocks=1 00:12:00.171 00:12:00.171 ' 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:00.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=aafff4fe-1093-4af3-8d6a-83dfe237ecac 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=9000a71b-cc00-4fa6-9e2c-6dc73d4144aa 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=ea6e1ffd-118d-40e4-98a7-110fdfb82351 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:00.171 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:05.449 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:05.449 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:12:05.449 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:05.449 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:05.449 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:05.449 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:05.449 05:06:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:05.449 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:12:05.449 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:05.449 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:12:05.449 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:12:05.449 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:12:05.449 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:12:05.449 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:12:05.449 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:12:05.449 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:05.449 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:05.450 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:05.450 05:06:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:05.450 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:05.450 Found net devices under 0000:86:00.0: cvl_0_0 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
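The discovery loop above resolves each supported PCI function to its kernel net device through sysfs rather than by name; a minimal by-hand sketch of the same lookup for the first e810 port found in this run (the interface name is host-specific):

  # each entry under .../net/ is the netdev bound to that PCI function
  ls /sys/bus/pci/devices/0000:86:00.0/net/
  # on this machine: cvl_0_0

The same lookup repeats for the second port below.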
00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:05.450 Found net devices under 0000:86:00.1: cvl_0_1 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:05.450 05:06:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:05.450 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:05.450 05:06:42 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:05.450 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:05.450 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:05.709 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:05.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:05.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:12:05.709 00:12:05.709 --- 10.0.0.2 ping statistics --- 00:12:05.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.709 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:12:05.709 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:05.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:05.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:12:05.709 00:12:05.709 --- 10.0.0.1 ping statistics --- 00:12:05.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.709 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:12:05.709 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.709 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:12:05.709 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:05.709 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.709 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:05.709 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:05.709 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.709 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:05.709 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:05.709 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:05.709 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:05.709 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:05.710 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:05.710 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3533215 00:12:05.710 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3533215 00:12:05.710 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:05.710 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3533215 ']' 00:12:05.710 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.710 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:05.710 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.710 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:05.710 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:05.710 [2024-12-09 05:06:42.209796] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:12:05.710 [2024-12-09 05:06:42.209849] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.710 [2024-12-09 05:06:42.281167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.710 [2024-12-09 05:06:42.321202] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.710 [2024-12-09 05:06:42.321239] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.710 [2024-12-09 05:06:42.321246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.710 [2024-12-09 05:06:42.321252] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.710 [2024-12-09 05:06:42.321257] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
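To recap the prologue in one place: the script splits the two cvl ports between the root namespace (initiator side, cvl_0_1) and a private namespace (target side, cvl_0_0), opens TCP port 4420, confirms reachability with one ping in each direction, and then launches nvmf_tgt inside that namespace; the EAL/DPDK notices above are that process starting up. A condensed, trace-prefix-free sketch of the same sequence (workspace paths shortened):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator keeps cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &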
00:12:05.710 [2024-12-09 05:06:42.321819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.968 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.968 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:05.968 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:05.968 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:05.968 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:05.968 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.968 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:06.233 [2024-12-09 05:06:42.627231] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.233 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:06.233 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:06.233 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:06.233 Malloc1 00:12:06.233 05:06:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:06.492 Malloc2 00:12:06.492 05:06:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:06.751 05:06:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:07.009 05:06:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.009 [2024-12-09 05:06:43.617177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.009 05:06:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:07.009 05:06:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ea6e1ffd-118d-40e4-98a7-110fdfb82351 -a 10.0.0.2 -s 4420 -i 4 00:12:07.266 05:06:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:07.266 05:06:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:07.266 05:06:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.266 05:06:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:07.266 
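Before the first connect, the target is provisioned entirely over rpc.py; stripped of the xtrace prefixes, the sequence the script runs next is (rpc.py path shortened, serial and NQNs exactly as in the trace):

    rpc=./spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: connect as host1 with 4 I/O queues (the script also passes a fixed host ID via -I)
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -a 10.0.0.2 -s 4420 -i 4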
05:06:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:09.797 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:09.797 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:09.797 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.797 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:09.798 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.798 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:09.798 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:09.798 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:09.798 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:09.798 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:09.798 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:09.798 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:09.798 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:09.798 [ 0]:0x1 00:12:09.798 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:09.798 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:09.798 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b48c9a32d22b4a0a948c41c8141a0bbf 00:12:09.798 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b48c9a32d22b4a0a948c41c8141a0bbf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:09.798 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:09.798 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:09.798 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:09.798 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:09.798 [ 0]:0x1 00:12:09.798 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:09.798 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:09.798 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b48c9a32d22b4a0a948c41c8141a0bbf 00:12:09.798 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b48c9a32d22b4a0a948c41c8141a0bbf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:09.798 05:06:46 
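The ns_is_visible probe repeated from here on is just two nvme-cli calls against the connected controller: list the namespace IDs the controller reports, then read the NGUID and compare it with all-zeroes (a namespace that is masked for this host identifies with a zero NGUID). A standalone version of the same check, using /dev/nvme0 and the NSID seen above:

    ctrl=/dev/nvme0
    nsid=0x1
    nvme list-ns "$ctrl" | grep "$nsid"                               # e.g. "[ 0]:0x1"
    nguid=$(nvme id-ns "$ctrl" -n "$nsid" -o json | jq -r .nguid)
    [[ "$nguid" != "00000000000000000000000000000000" ]]              # non-zero NGUID => visible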
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:09.798 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:09.798 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:09.798 [ 1]:0x2 00:12:09.798 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:09.798 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:09.798 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0260ac5d37fb4ccf8f4e16bab752a7a0 00:12:09.798 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0260ac5d37fb4ccf8f4e16bab752a7a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:09.798 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:09.798 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:10.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.056 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:10.315 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:10.315 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:10.315 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ea6e1ffd-118d-40e4-98a7-110fdfb82351 -a 10.0.0.2 -s 4420 -i 4 00:12:10.574 05:06:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:10.574 05:06:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:10.574 05:06:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.574 05:06:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:12:10.574 05:06:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:12:10.574 05:06:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:13.108 [ 0]:0x2 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:13.108 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=0260ac5d37fb4ccf8f4e16bab752a7a0 00:12:13.109 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0260ac5d37fb4ccf8f4e16bab752a7a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:13.109 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:13.109 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:13.109 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:13.109 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:13.109 [ 0]:0x1 00:12:13.109 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:13.109 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:13.109 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b48c9a32d22b4a0a948c41c8141a0bbf 00:12:13.109 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b48c9a32d22b4a0a948c41c8141a0bbf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:13.109 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:13.109 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:13.109 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:13.109 [ 1]:0x2 00:12:13.109 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:13.109 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:13.109 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0260ac5d37fb4ccf8f4e16bab752a7a0 00:12:13.109 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0260ac5d37fb4ccf8f4e16bab752a7a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:13.109 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:13.383 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:13.383 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:13.383 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:13.383 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:13.383 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.383 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:13.383 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.383 05:06:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:13.383 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:13.383 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:13.383 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:13.383 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:13.383 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:13.383 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:13.383 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:13.383 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:13.383 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:13.383 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:13.383 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:13.383 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:13.383 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:13.383 [ 0]:0x2 00:12:13.383 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:13.383 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:13.383 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0260ac5d37fb4ccf8f4e16bab752a7a0 00:12:13.383 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0260ac5d37fb4ccf8f4e16bab752a7a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:13.383 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:13.383 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:13.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.383 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:13.642 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:13.642 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ea6e1ffd-118d-40e4-98a7-110fdfb82351 -a 10.0.0.2 -s 4420 -i 4 00:12:13.642 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:13.642 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:13.642 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:13.642 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:13.642 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:13.642 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:16.198 [ 0]:0x1 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b48c9a32d22b4a0a948c41c8141a0bbf 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b48c9a32d22b4a0a948c41c8141a0bbf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:16.198 [ 1]:0x2 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0260ac5d37fb4ccf8f4e16bab752a7a0 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0260ac5d37fb4ccf8f4e16bab752a7a0 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:16.198 [ 0]:0x2 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0260ac5d37fb4ccf8f4e16bab752a7a0 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0260ac5d37fb4ccf8f4e16bab752a7a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:16.198 05:06:52 
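The masking exercised in this stretch hinges on three RPCs: the namespace is re-created with --no-auto-visible so that no host sees it by default, and nvmf_ns_add_host / nvmf_ns_remove_host then unmask or re-mask it for an individual host NQN, which is exactly what the alternating non-zero/zero NGUID checks above confirm. Condensed:

    rpc=./spdk/scripts/rpc.py
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    $rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # ns 1 visible to host1
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # ns 1 masked again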
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:16.198 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:16.457 [2024-12-09 05:06:52.927641] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:16.457 request: 00:12:16.457 { 00:12:16.457 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:16.457 "nsid": 2, 00:12:16.457 "host": "nqn.2016-06.io.spdk:host1", 00:12:16.457 "method": "nvmf_ns_remove_host", 00:12:16.457 "req_id": 1 00:12:16.457 } 00:12:16.457 Got JSON-RPC error response 00:12:16.457 response: 00:12:16.457 { 00:12:16.457 "code": -32602, 00:12:16.457 "message": "Invalid parameters" 00:12:16.457 } 00:12:16.457 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:16.457 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:16.457 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:16.457 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:16.457 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:16.457 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:16.457 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:16.457 05:06:52 
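The -32602 response above is the expected outcome rather than a failure: the NOT wrapper asserts that nvmf_ns_remove_host is rejected for namespace 2, which was added earlier without --no-auto-visible and is therefore always visible, so per-host masking does not apply to it. Reproduced in isolation under that assumption:

    rpc=./spdk/scripts/rpc.py
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2                                # auto-visible
    $rpc nvmf_ns_remove_host   nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 || echo rejected
    # -> JSON-RPC error -32602 "Invalid parameters", as shown in the trace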
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:16.457 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.457 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:16.457 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.457 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:16.457 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:16.457 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:16.457 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:16.457 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:16.457 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:16.457 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:16.457 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:16.457 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:16.457 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:16.457 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:16.457 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:16.457 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:16.457 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:16.457 [ 0]:0x2 00:12:16.457 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:16.457 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:16.457 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0260ac5d37fb4ccf8f4e16bab752a7a0 00:12:16.457 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0260ac5d37fb4ccf8f4e16bab752a7a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:16.457 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:16.457 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:16.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.717 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3535231 00:12:16.717 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:16.717 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.717 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3535231 /var/tmp/host.sock 00:12:16.717 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3535231 ']' 00:12:16.717 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:16.717 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.717 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:16.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:16.717 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.717 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:16.717 [2024-12-09 05:06:53.162083] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:12:16.717 [2024-12-09 05:06:53.162131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3535231 ] 00:12:16.717 [2024-12-09 05:06:53.225821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.717 [2024-12-09 05:06:53.266605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.977 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:16.977 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:16.977 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.236 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:17.236 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid aafff4fe-1093-4af3-8d6a-83dfe237ecac 00:12:17.236 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:17.236 05:06:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g AAFFF4FE10934AF38D6A83DFE237ECAC -i 00:12:17.494 05:06:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 9000a71b-cc00-4fa6-9e2c-6dc73d4144aa 00:12:17.494 05:06:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:17.494 05:06:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 9000A71BCC004FA69E2C6DC73D4144AA -i 00:12:17.753 05:06:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:18.012 05:06:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:18.012 05:06:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:18.012 05:06:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:18.270 nvme0n1 00:12:18.270 05:06:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:18.270 05:06:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:18.837 nvme1n2 00:12:18.837 05:06:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:18.837 05:06:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:18.837 05:06:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:18.837 05:06:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:18.837 05:06:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:18.838 05:06:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:18.838 05:06:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:18.838 05:06:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:18.838 05:06:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:19.095 05:06:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ aafff4fe-1093-4af3-8d6a-83dfe237ecac == \a\a\f\f\f\4\f\e\-\1\0\9\3\-\4\a\f\3\-\8\d\6\a\-\8\3\d\f\e\2\3\7\e\c\a\c ]] 00:12:19.095 05:06:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:19.095 05:06:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:19.095 05:06:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:19.353 05:06:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
9000a71b-cc00-4fa6-9e2c-6dc73d4144aa == \9\0\0\0\a\7\1\b\-\c\c\0\0\-\4\f\a\6\-\9\e\2\c\-\6\d\c\7\3\d\4\1\4\4\a\a ]] 00:12:19.353 05:06:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.353 05:06:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:19.612 05:06:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid aafff4fe-1093-4af3-8d6a-83dfe237ecac 00:12:19.612 05:06:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:19.612 05:06:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g AAFFF4FE10934AF38D6A83DFE237ECAC 00:12:19.612 05:06:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:19.612 05:06:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g AAFFF4FE10934AF38D6A83DFE237ECAC 00:12:19.612 05:06:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:19.612 05:06:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.612 05:06:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:19.612 05:06:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.612 05:06:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:19.612 05:06:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.612 05:06:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:19.612 05:06:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:19.612 05:06:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g AAFFF4FE10934AF38D6A83DFE237ECAC 00:12:19.872 [2024-12-09 05:06:56.357124] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:12:19.872 [2024-12-09 05:06:56.357156] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:12:19.872 [2024-12-09 05:06:56.357164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.872 request: 00:12:19.872 { 00:12:19.872 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:19.872 "namespace": { 00:12:19.872 "bdev_name": 
"invalid", 00:12:19.872 "nsid": 1, 00:12:19.872 "nguid": "AAFFF4FE10934AF38D6A83DFE237ECAC", 00:12:19.872 "no_auto_visible": false, 00:12:19.872 "hide_metadata": false 00:12:19.872 }, 00:12:19.872 "method": "nvmf_subsystem_add_ns", 00:12:19.872 "req_id": 1 00:12:19.872 } 00:12:19.872 Got JSON-RPC error response 00:12:19.872 response: 00:12:19.872 { 00:12:19.872 "code": -32602, 00:12:19.872 "message": "Invalid parameters" 00:12:19.872 } 00:12:19.872 05:06:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:19.872 05:06:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:19.872 05:06:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:19.872 05:06:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:19.872 05:06:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid aafff4fe-1093-4af3-8d6a-83dfe237ecac 00:12:19.872 05:06:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:19.872 05:06:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g AAFFF4FE10934AF38D6A83DFE237ECAC -i 00:12:20.131 05:06:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:12:22.035 05:06:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:12:22.035 05:06:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:12:22.035 05:06:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:22.294 05:06:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:12:22.294 05:06:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3535231 00:12:22.294 05:06:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3535231 ']' 00:12:22.294 05:06:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3535231 00:12:22.294 05:06:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:22.294 05:06:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:22.294 05:06:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3535231 00:12:22.294 05:06:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:22.295 05:06:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:22.295 05:06:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3535231' 00:12:22.295 killing process with pid 3535231 00:12:22.295 05:06:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3535231 00:12:22.295 05:06:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3535231 00:12:22.554 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.813 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:12:22.813 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:12:22.813 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:22.813 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:12:22.813 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:22.813 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:12:22.813 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:22.813 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:22.813 rmmod nvme_tcp 00:12:22.813 rmmod nvme_fabrics 00:12:22.813 rmmod nvme_keyring 00:12:22.813 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:22.813 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:12:22.813 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:12:22.813 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3533215 ']' 00:12:22.813 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3533215 00:12:22.813 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3533215 ']' 00:12:22.813 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3533215 00:12:22.813 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:22.813 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:22.813 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3533215 00:12:23.071 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:23.071 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:23.071 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3533215' 00:12:23.071 killing process with pid 3533215 00:12:23.071 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3533215 00:12:23.071 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3533215 00:12:23.330 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:23.330 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:23.330 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:23.330 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:12:23.330 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:12:23.330 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:12:23.330 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:12:23.330 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:23.330 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:23.330 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.330 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.330 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.234 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:25.234 00:12:25.234 real 0m25.478s 00:12:25.234 user 0m30.599s 00:12:25.234 sys 0m6.662s 00:12:25.234 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.234 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:25.234 ************************************ 00:12:25.234 END TEST nvmf_ns_masking 00:12:25.234 ************************************ 00:12:25.234 05:07:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:12:25.234 05:07:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:25.234 05:07:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:25.234 05:07:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.234 05:07:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:25.234 ************************************ 00:12:25.234 START TEST nvmf_nvme_cli 00:12:25.234 ************************************ 00:12:25.234 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:25.494 * Looking for test storage... 
00:12:25.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:25.494 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:25.494 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:12:25.494 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:25.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.494 --rc genhtml_branch_coverage=1 00:12:25.494 --rc genhtml_function_coverage=1 00:12:25.494 --rc genhtml_legend=1 00:12:25.494 --rc geninfo_all_blocks=1 00:12:25.494 --rc geninfo_unexecuted_blocks=1 00:12:25.494 00:12:25.494 ' 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:25.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.494 --rc genhtml_branch_coverage=1 00:12:25.494 --rc genhtml_function_coverage=1 00:12:25.494 --rc genhtml_legend=1 00:12:25.494 --rc geninfo_all_blocks=1 00:12:25.494 --rc geninfo_unexecuted_blocks=1 00:12:25.494 00:12:25.494 ' 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:25.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.494 --rc genhtml_branch_coverage=1 00:12:25.494 --rc genhtml_function_coverage=1 00:12:25.494 --rc genhtml_legend=1 00:12:25.494 --rc geninfo_all_blocks=1 00:12:25.494 --rc geninfo_unexecuted_blocks=1 00:12:25.494 00:12:25.494 ' 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:25.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.494 --rc genhtml_branch_coverage=1 00:12:25.494 --rc genhtml_function_coverage=1 00:12:25.494 --rc genhtml_legend=1 00:12:25.494 --rc geninfo_all_blocks=1 00:12:25.494 --rc geninfo_unexecuted_blocks=1 00:12:25.494 00:12:25.494 ' 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
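The lt/cmp_versions trace above decides whether the installed lcov (1.15 here) is older than 2 before the branch/function coverage flags are enabled. A minimal sketch of that comparison, assuming the split-on-'.-' fields and per-field loop seen in the trace (the real scripts/common.sh additionally validates each field through its decimal helper):

# Sketch of the version check traced above; not the verbatim scripts/common.sh body
lt() {
    local -a ver1 ver2
    local v
    IFS=.- read -ra ver1 <<< "$1"      # e.g. 1.15 -> (1 15)
    IFS=.- read -ra ver2 <<< "$2"      # e.g. 2    -> (2)
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        ((a > b)) && return 1          # first differing field decides
        ((a < b)) && return 0
    done
    return 1                           # equal versions are not "less than"
}

if lt "$(lcov --version | awk '{print $NF}')" 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi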
00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:25.494 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:25.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:25.495 05:07:02 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:12:25.495 05:07:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:30.773 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:30.773 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.773 
05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:30.773 Found net devices under 0000:86:00.0: cvl_0_0 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:30.773 Found net devices under 0000:86:00.1: cvl_0_1 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:30.773 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:30.774 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:30.774 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:30.774 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:30.774 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:30.774 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:30.774 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:30.774 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:31.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:31.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:12:31.033 00:12:31.033 --- 10.0.0.2 ping statistics --- 00:12:31.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.033 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:31.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:31.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:12:31.033 00:12:31.033 --- 10.0.0.1 ping statistics --- 00:12:31.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.033 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3539855 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3539855 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3539855 ']' 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:31.033 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:31.033 [2024-12-09 05:07:07.638287] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:12:31.033 [2024-12-09 05:07:07.638333] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.291 [2024-12-09 05:07:07.707296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:31.291 [2024-12-09 05:07:07.749236] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.291 [2024-12-09 05:07:07.749273] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.291 [2024-12-09 05:07:07.749280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.291 [2024-12-09 05:07:07.749285] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.291 [2024-12-09 05:07:07.749290] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:31.291 [2024-12-09 05:07:07.750763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.291 [2024-12-09 05:07:07.750858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.291 [2024-12-09 05:07:07.750925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.291 [2024-12-09 05:07:07.750926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.291 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:31.291 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:12:31.291 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:31.291 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:31.291 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:31.291 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.291 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:31.291 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.291 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:31.291 [2024-12-09 05:07:07.898035] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:31.291 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.291 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:31.291 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.291 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:31.549 Malloc0 00:12:31.549 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.549 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:31.549 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
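Stripped of the xtrace, the nvme_cli target provisioning and the host-side checks that follow come down to the sequence below. Every RPC and nvme-cli flag is copied from the trace (the test invokes rpc.py through its rpc_cmd helper); the hostnqn/hostid values come from `nvme gen-hostnqn` in common.sh and differ per run.

# Target side (nvmf_tgt running inside cvl_0_0_ns_spdk), per the trace
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Host side (initiator interface cvl_0_1), per the trace
nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 4420
nvme connect  --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME            # expect 2 namespaces (Malloc0/Malloc1)
nvme disconnect -n nqn.2016-06.io.spdk:cnode1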
00:12:31.549 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:31.549 Malloc1 00:12:31.549 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.549 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:31.549 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.549 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:31.549 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.549 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:31.549 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.549 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:31.549 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.549 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:31.549 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.549 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:31.549 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.549 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.549 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.549 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:31.549 [2024-12-09 05:07:07.994633] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.549 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.550 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:31.550 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.550 05:07:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:31.550 05:07:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.550 05:07:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:31.550 00:12:31.550 Discovery Log Number of Records 2, Generation counter 2 00:12:31.550 =====Discovery Log Entry 0====== 00:12:31.550 trtype: tcp 00:12:31.550 adrfam: ipv4 00:12:31.550 subtype: current discovery subsystem 00:12:31.550 treq: not required 00:12:31.550 portid: 0 00:12:31.550 trsvcid: 4420 00:12:31.550 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:12:31.550 traddr: 10.0.0.2 00:12:31.550 eflags: explicit discovery connections, duplicate discovery information 00:12:31.550 sectype: none 00:12:31.550 =====Discovery Log Entry 1====== 00:12:31.550 trtype: tcp 00:12:31.550 adrfam: ipv4 00:12:31.550 subtype: nvme subsystem 00:12:31.550 treq: not required 00:12:31.550 portid: 0 00:12:31.550 trsvcid: 4420 00:12:31.550 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:31.550 traddr: 10.0.0.2 00:12:31.550 eflags: none 00:12:31.550 sectype: none 00:12:31.550 05:07:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:31.550 05:07:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:31.550 05:07:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:31.550 05:07:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:31.550 05:07:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:31.550 05:07:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:31.550 05:07:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:31.550 05:07:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:31.550 05:07:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:31.550 05:07:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:31.550 05:07:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.921 05:07:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:32.921 05:07:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:12:32.921 05:07:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:32.921 05:07:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:32.921 05:07:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:32.921 05:07:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:12:34.817 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:34.817 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:34.817 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:34.817 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:34.817 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:34.817 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:12:34.817 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:34.817 05:07:11 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:34.817 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:34.817 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:34.817 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:34.817 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:34.817 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:34.817 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:34.817 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:34.817 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:34.817 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:34.817 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:34.817 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:34.817 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:34.817 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:12:34.817 /dev/nvme0n2 ]] 00:12:34.818 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:34.818 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:34.818 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:34.818 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:34.818 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:34.818 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:34.818 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:34.818 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:34.818 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:34.818 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:34.818 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:34.818 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:34.818 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:34.818 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:34.818 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:34.818 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:34.818 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.076 05:07:11 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:35.076 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:12:35.076 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:35.076 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.076 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:35.077 rmmod nvme_tcp 00:12:35.077 rmmod nvme_fabrics 00:12:35.077 rmmod nvme_keyring 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3539855 ']' 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3539855 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3539855 ']' 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3539855 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3539855 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3539855' 00:12:35.077 killing process with pid 3539855 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3539855 00:12:35.077 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3539855 00:12:35.336 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:35.336 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:35.336 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:35.336 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:12:35.336 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:12:35.336 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:35.336 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:12:35.336 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:35.336 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:35.336 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.336 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.336 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.872 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:37.872 00:12:37.872 real 0m12.121s 00:12:37.872 user 0m17.993s 00:12:37.872 sys 0m4.793s 00:12:37.872 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.872 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:37.872 ************************************ 00:12:37.872 END TEST nvmf_nvme_cli 00:12:37.872 ************************************ 00:12:37.872 05:07:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:12:37.872 05:07:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:37.872 05:07:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:37.872 05:07:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:37.873 ************************************ 00:12:37.873 START TEST nvmf_vfio_user 00:12:37.873 ************************************ 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:12:37.873 * Looking for test storage... 00:12:37.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:37.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.873 --rc genhtml_branch_coverage=1 00:12:37.873 --rc genhtml_function_coverage=1 00:12:37.873 --rc genhtml_legend=1 00:12:37.873 --rc geninfo_all_blocks=1 00:12:37.873 --rc geninfo_unexecuted_blocks=1 00:12:37.873 00:12:37.873 ' 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:37.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.873 --rc genhtml_branch_coverage=1 00:12:37.873 --rc genhtml_function_coverage=1 00:12:37.873 --rc genhtml_legend=1 00:12:37.873 --rc geninfo_all_blocks=1 00:12:37.873 --rc geninfo_unexecuted_blocks=1 00:12:37.873 00:12:37.873 ' 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:37.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.873 --rc genhtml_branch_coverage=1 00:12:37.873 --rc genhtml_function_coverage=1 00:12:37.873 --rc genhtml_legend=1 00:12:37.873 --rc geninfo_all_blocks=1 00:12:37.873 --rc geninfo_unexecuted_blocks=1 00:12:37.873 00:12:37.873 ' 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:37.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.873 --rc genhtml_branch_coverage=1 00:12:37.873 --rc genhtml_function_coverage=1 00:12:37.873 --rc genhtml_legend=1 00:12:37.873 --rc geninfo_all_blocks=1 00:12:37.873 --rc geninfo_unexecuted_blocks=1 00:12:37.873 00:12:37.873 ' 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.873 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:37.874 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3541017 00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3541017' 00:12:37.874 Process pid: 3541017 00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3541017 00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3541017 ']' 00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:37.874 [2024-12-09 05:07:14.298376] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:12:37.874 [2024-12-09 05:07:14.298424] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.874 [2024-12-09 05:07:14.364109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:37.874 [2024-12-09 05:07:14.407251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:37.874 [2024-12-09 05:07:14.407287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:37.874 [2024-12-09 05:07:14.407295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:37.874 [2024-12-09 05:07:14.407301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:37.874 [2024-12-09 05:07:14.407306] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:37.874 [2024-12-09 05:07:14.408939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.874 [2024-12-09 05:07:14.409034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.874 [2024-12-09 05:07:14.409071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:37.874 [2024-12-09 05:07:14.409073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:12:37.874 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:39.250 05:07:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:39.250 05:07:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:39.250 05:07:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:39.250 05:07:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:39.250 05:07:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:39.250 05:07:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:39.508 Malloc1 00:12:39.508 05:07:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:39.768 05:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:39.768 05:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:40.027 05:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:40.027 05:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:40.027 05:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:40.286 Malloc2 00:12:40.286 05:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
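The setup traced above boils down to a short RPC sequence; a minimal sketch, assuming the target was already started as shown earlier (nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]') and with paths written relative to the SPDK tree rather than the Jenkins workspace:

    # Create the VFIOUSER transport once, then one subsystem per emulated device.
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1       # 64 MB malloc bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
    # The second device (cnode2 / Malloc2 under /var/run/vfio-user/domain/vfio-user2/2) repeats the same steps.

The listener address is a directory rather than an IP: the target places its vfio-user control socket (cntrl) inside it, and that socket is what clients attach to, as the per-BAR mapping messages further down show.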
00:12:40.545 05:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:40.545 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:40.803 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:40.803 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:40.803 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:40.803 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:40.803 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:40.803 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:40.803 [2024-12-09 05:07:17.396845] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:12:40.803 [2024-12-09 05:07:17.396875] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3541660 ] 00:12:40.803 [2024-12-09 05:07:17.442874] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:41.064 [2024-12-09 05:07:17.449265] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:41.064 [2024-12-09 05:07:17.449287] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe9655aa000 00:12:41.064 [2024-12-09 05:07:17.450259] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:41.064 [2024-12-09 05:07:17.451260] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:41.064 [2024-12-09 05:07:17.452265] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:41.064 [2024-12-09 05:07:17.453271] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:41.064 [2024-12-09 05:07:17.454272] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:41.064 [2024-12-09 05:07:17.455285] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:41.064 [2024-12-09 05:07:17.456291] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:12:41.064 [2024-12-09 05:07:17.457295] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:41.064 [2024-12-09 05:07:17.458297] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:41.064 [2024-12-09 05:07:17.458307] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe96559f000 00:12:41.064 [2024-12-09 05:07:17.459368] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:41.064 [2024-12-09 05:07:17.472302] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:41.064 [2024-12-09 05:07:17.472331] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:12:41.064 [2024-12-09 05:07:17.477417] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:41.064 [2024-12-09 05:07:17.477453] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:41.064 [2024-12-09 05:07:17.477524] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:12:41.064 [2024-12-09 05:07:17.477539] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:12:41.064 [2024-12-09 05:07:17.477544] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:12:41.064 [2024-12-09 05:07:17.478418] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:41.064 [2024-12-09 05:07:17.478429] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:12:41.064 [2024-12-09 05:07:17.478436] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:12:41.064 [2024-12-09 05:07:17.479426] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:41.064 [2024-12-09 05:07:17.479435] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:12:41.064 [2024-12-09 05:07:17.479441] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:12:41.064 [2024-12-09 05:07:17.480433] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:41.064 [2024-12-09 05:07:17.480442] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:41.064 [2024-12-09 05:07:17.481445] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
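Every example binary exercised in this run addresses the emulated controller the same way, through an SPDK transport ID string rather than a PCI address; a sketch of the pattern, using the identify invocation above (the -L flags turn on the nvme/vfio debug trace that fills this section of the log):

    build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -g -L nvme -L nvme_vfio -L vfio_pci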
00:12:41.064 [2024-12-09 05:07:17.481452] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:12:41.064 [2024-12-09 05:07:17.481457] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:12:41.064 [2024-12-09 05:07:17.481463] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:41.064 [2024-12-09 05:07:17.481569] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:12:41.064 [2024-12-09 05:07:17.481573] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:41.064 [2024-12-09 05:07:17.481577] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:41.064 [2024-12-09 05:07:17.482452] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:41.064 [2024-12-09 05:07:17.483453] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:41.064 [2024-12-09 05:07:17.484461] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:41.064 [2024-12-09 05:07:17.485462] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:41.064 [2024-12-09 05:07:17.485542] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:41.064 [2024-12-09 05:07:17.486473] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:41.064 [2024-12-09 05:07:17.486480] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:41.064 [2024-12-09 05:07:17.486485] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:12:41.064 [2024-12-09 05:07:17.486502] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:12:41.064 [2024-12-09 05:07:17.486513] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:12:41.064 [2024-12-09 05:07:17.486531] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:41.064 [2024-12-09 05:07:17.486536] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:41.064 [2024-12-09 05:07:17.486540] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:41.064 [2024-12-09 05:07:17.486552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:12:41.064 [2024-12-09 05:07:17.486596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:41.064 [2024-12-09 05:07:17.486606] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:12:41.064 [2024-12-09 05:07:17.486611] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:12:41.064 [2024-12-09 05:07:17.486614] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:12:41.064 [2024-12-09 05:07:17.486619] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:41.064 [2024-12-09 05:07:17.486624] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:12:41.064 [2024-12-09 05:07:17.486628] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:12:41.064 [2024-12-09 05:07:17.486632] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:12:41.064 [2024-12-09 05:07:17.486640] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:12:41.064 [2024-12-09 05:07:17.486649] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:41.064 [2024-12-09 05:07:17.486661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:41.064 [2024-12-09 05:07:17.486673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.064 [2024-12-09 05:07:17.486681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.064 [2024-12-09 05:07:17.486688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.064 [2024-12-09 05:07:17.486695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.064 [2024-12-09 05:07:17.486700] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:12:41.064 [2024-12-09 05:07:17.486707] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:41.064 [2024-12-09 05:07:17.486715] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:41.064 [2024-12-09 05:07:17.486727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:41.064 [2024-12-09 05:07:17.486732] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:12:41.065 
[2024-12-09 05:07:17.486737] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:41.065 [2024-12-09 05:07:17.486745] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:12:41.065 [2024-12-09 05:07:17.486750] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:12:41.065 [2024-12-09 05:07:17.486758] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:41.065 [2024-12-09 05:07:17.486767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:41.065 [2024-12-09 05:07:17.486818] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:12:41.065 [2024-12-09 05:07:17.486826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:12:41.065 [2024-12-09 05:07:17.486832] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:41.065 [2024-12-09 05:07:17.486836] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:41.065 [2024-12-09 05:07:17.486839] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:41.065 [2024-12-09 05:07:17.486845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:41.065 [2024-12-09 05:07:17.486858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:41.065 [2024-12-09 05:07:17.486869] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:12:41.065 [2024-12-09 05:07:17.486877] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:12:41.065 [2024-12-09 05:07:17.486884] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:12:41.065 [2024-12-09 05:07:17.486890] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:41.065 [2024-12-09 05:07:17.486896] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:41.065 [2024-12-09 05:07:17.486899] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:41.065 [2024-12-09 05:07:17.486904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:41.065 [2024-12-09 05:07:17.486931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:41.065 [2024-12-09 05:07:17.486942] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:12:41.065 [2024-12-09 05:07:17.486948] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:41.065 [2024-12-09 05:07:17.486954] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:41.065 [2024-12-09 05:07:17.486958] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:41.065 [2024-12-09 05:07:17.486961] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:41.065 [2024-12-09 05:07:17.486967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:41.065 [2024-12-09 05:07:17.486978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:41.065 [2024-12-09 05:07:17.486987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:41.065 [2024-12-09 05:07:17.486993] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:12:41.065 [2024-12-09 05:07:17.487006] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:12:41.065 [2024-12-09 05:07:17.487012] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:12:41.065 [2024-12-09 05:07:17.487017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:41.065 [2024-12-09 05:07:17.487021] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:12:41.065 [2024-12-09 05:07:17.487026] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:12:41.065 [2024-12-09 05:07:17.487030] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:12:41.065 [2024-12-09 05:07:17.487035] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:12:41.065 [2024-12-09 05:07:17.487052] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:41.065 [2024-12-09 05:07:17.487063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:41.065 [2024-12-09 05:07:17.487073] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:41.065 [2024-12-09 05:07:17.487082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:41.065 [2024-12-09 05:07:17.487091] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:41.065 [2024-12-09 05:07:17.487099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:41.065 [2024-12-09 05:07:17.487109] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:41.065 [2024-12-09 05:07:17.487117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:41.065 [2024-12-09 05:07:17.487129] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:41.065 [2024-12-09 05:07:17.487133] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:41.065 [2024-12-09 05:07:17.487137] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:41.065 [2024-12-09 05:07:17.487140] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:41.065 [2024-12-09 05:07:17.487142] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:41.065 [2024-12-09 05:07:17.487148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:41.065 [2024-12-09 05:07:17.487155] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:41.065 [2024-12-09 05:07:17.487158] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:41.065 [2024-12-09 05:07:17.487162] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:41.065 [2024-12-09 05:07:17.487167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:41.065 [2024-12-09 05:07:17.487173] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:41.065 [2024-12-09 05:07:17.487177] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:41.065 [2024-12-09 05:07:17.487180] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:41.065 [2024-12-09 05:07:17.487186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:41.065 [2024-12-09 05:07:17.487192] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:41.065 [2024-12-09 05:07:17.487196] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:41.065 [2024-12-09 05:07:17.487199] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:41.065 [2024-12-09 05:07:17.487204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:41.065 [2024-12-09 05:07:17.487211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:41.065 [2024-12-09 05:07:17.487223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:12:41.065 [2024-12-09 05:07:17.487232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:41.065 [2024-12-09 05:07:17.487238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:41.065 ===================================================== 00:12:41.065 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:41.065 ===================================================== 00:12:41.065 Controller Capabilities/Features 00:12:41.065 ================================ 00:12:41.065 Vendor ID: 4e58 00:12:41.065 Subsystem Vendor ID: 4e58 00:12:41.065 Serial Number: SPDK1 00:12:41.065 Model Number: SPDK bdev Controller 00:12:41.065 Firmware Version: 25.01 00:12:41.065 Recommended Arb Burst: 6 00:12:41.065 IEEE OUI Identifier: 8d 6b 50 00:12:41.065 Multi-path I/O 00:12:41.065 May have multiple subsystem ports: Yes 00:12:41.065 May have multiple controllers: Yes 00:12:41.065 Associated with SR-IOV VF: No 00:12:41.065 Max Data Transfer Size: 131072 00:12:41.065 Max Number of Namespaces: 32 00:12:41.065 Max Number of I/O Queues: 127 00:12:41.065 NVMe Specification Version (VS): 1.3 00:12:41.065 NVMe Specification Version (Identify): 1.3 00:12:41.065 Maximum Queue Entries: 256 00:12:41.065 Contiguous Queues Required: Yes 00:12:41.065 Arbitration Mechanisms Supported 00:12:41.065 Weighted Round Robin: Not Supported 00:12:41.065 Vendor Specific: Not Supported 00:12:41.065 Reset Timeout: 15000 ms 00:12:41.065 Doorbell Stride: 4 bytes 00:12:41.065 NVM Subsystem Reset: Not Supported 00:12:41.065 Command Sets Supported 00:12:41.065 NVM Command Set: Supported 00:12:41.065 Boot Partition: Not Supported 00:12:41.065 Memory Page Size Minimum: 4096 bytes 00:12:41.065 Memory Page Size Maximum: 4096 bytes 00:12:41.065 Persistent Memory Region: Not Supported 00:12:41.066 Optional Asynchronous Events Supported 00:12:41.066 Namespace Attribute Notices: Supported 00:12:41.066 Firmware Activation Notices: Not Supported 00:12:41.066 ANA Change Notices: Not Supported 00:12:41.066 PLE Aggregate Log Change Notices: Not Supported 00:12:41.066 LBA Status Info Alert Notices: Not Supported 00:12:41.066 EGE Aggregate Log Change Notices: Not Supported 00:12:41.066 Normal NVM Subsystem Shutdown event: Not Supported 00:12:41.066 Zone Descriptor Change Notices: Not Supported 00:12:41.066 Discovery Log Change Notices: Not Supported 00:12:41.066 Controller Attributes 00:12:41.066 128-bit Host Identifier: Supported 00:12:41.066 Non-Operational Permissive Mode: Not Supported 00:12:41.066 NVM Sets: Not Supported 00:12:41.066 Read Recovery Levels: Not Supported 00:12:41.066 Endurance Groups: Not Supported 00:12:41.066 Predictable Latency Mode: Not Supported 00:12:41.066 Traffic Based Keep ALive: Not Supported 00:12:41.066 Namespace Granularity: Not Supported 00:12:41.066 SQ Associations: Not Supported 00:12:41.066 UUID List: Not Supported 00:12:41.066 Multi-Domain Subsystem: Not Supported 00:12:41.066 Fixed Capacity Management: Not Supported 00:12:41.066 Variable Capacity Management: Not Supported 00:12:41.066 Delete Endurance Group: Not Supported 00:12:41.066 Delete NVM Set: Not Supported 00:12:41.066 Extended LBA Formats Supported: Not Supported 00:12:41.066 Flexible Data Placement Supported: Not Supported 00:12:41.066 00:12:41.066 Controller Memory Buffer Support 00:12:41.066 ================================ 00:12:41.066 
Supported: No 00:12:41.066 00:12:41.066 Persistent Memory Region Support 00:12:41.066 ================================ 00:12:41.066 Supported: No 00:12:41.066 00:12:41.066 Admin Command Set Attributes 00:12:41.066 ============================ 00:12:41.066 Security Send/Receive: Not Supported 00:12:41.066 Format NVM: Not Supported 00:12:41.066 Firmware Activate/Download: Not Supported 00:12:41.066 Namespace Management: Not Supported 00:12:41.066 Device Self-Test: Not Supported 00:12:41.066 Directives: Not Supported 00:12:41.066 NVMe-MI: Not Supported 00:12:41.066 Virtualization Management: Not Supported 00:12:41.066 Doorbell Buffer Config: Not Supported 00:12:41.066 Get LBA Status Capability: Not Supported 00:12:41.066 Command & Feature Lockdown Capability: Not Supported 00:12:41.066 Abort Command Limit: 4 00:12:41.066 Async Event Request Limit: 4 00:12:41.066 Number of Firmware Slots: N/A 00:12:41.066 Firmware Slot 1 Read-Only: N/A 00:12:41.066 Firmware Activation Without Reset: N/A 00:12:41.066 Multiple Update Detection Support: N/A 00:12:41.066 Firmware Update Granularity: No Information Provided 00:12:41.066 Per-Namespace SMART Log: No 00:12:41.066 Asymmetric Namespace Access Log Page: Not Supported 00:12:41.066 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:41.066 Command Effects Log Page: Supported 00:12:41.066 Get Log Page Extended Data: Supported 00:12:41.066 Telemetry Log Pages: Not Supported 00:12:41.066 Persistent Event Log Pages: Not Supported 00:12:41.066 Supported Log Pages Log Page: May Support 00:12:41.066 Commands Supported & Effects Log Page: Not Supported 00:12:41.066 Feature Identifiers & Effects Log Page:May Support 00:12:41.066 NVMe-MI Commands & Effects Log Page: May Support 00:12:41.066 Data Area 4 for Telemetry Log: Not Supported 00:12:41.066 Error Log Page Entries Supported: 128 00:12:41.066 Keep Alive: Supported 00:12:41.066 Keep Alive Granularity: 10000 ms 00:12:41.066 00:12:41.066 NVM Command Set Attributes 00:12:41.066 ========================== 00:12:41.066 Submission Queue Entry Size 00:12:41.066 Max: 64 00:12:41.066 Min: 64 00:12:41.066 Completion Queue Entry Size 00:12:41.066 Max: 16 00:12:41.066 Min: 16 00:12:41.066 Number of Namespaces: 32 00:12:41.066 Compare Command: Supported 00:12:41.066 Write Uncorrectable Command: Not Supported 00:12:41.066 Dataset Management Command: Supported 00:12:41.066 Write Zeroes Command: Supported 00:12:41.066 Set Features Save Field: Not Supported 00:12:41.066 Reservations: Not Supported 00:12:41.066 Timestamp: Not Supported 00:12:41.066 Copy: Supported 00:12:41.066 Volatile Write Cache: Present 00:12:41.066 Atomic Write Unit (Normal): 1 00:12:41.066 Atomic Write Unit (PFail): 1 00:12:41.066 Atomic Compare & Write Unit: 1 00:12:41.066 Fused Compare & Write: Supported 00:12:41.066 Scatter-Gather List 00:12:41.066 SGL Command Set: Supported (Dword aligned) 00:12:41.066 SGL Keyed: Not Supported 00:12:41.066 SGL Bit Bucket Descriptor: Not Supported 00:12:41.066 SGL Metadata Pointer: Not Supported 00:12:41.066 Oversized SGL: Not Supported 00:12:41.066 SGL Metadata Address: Not Supported 00:12:41.066 SGL Offset: Not Supported 00:12:41.066 Transport SGL Data Block: Not Supported 00:12:41.066 Replay Protected Memory Block: Not Supported 00:12:41.066 00:12:41.066 Firmware Slot Information 00:12:41.066 ========================= 00:12:41.066 Active slot: 1 00:12:41.066 Slot 1 Firmware Revision: 25.01 00:12:41.066 00:12:41.066 00:12:41.066 Commands Supported and Effects 00:12:41.066 ============================== 00:12:41.066 Admin 
Commands 00:12:41.066 -------------- 00:12:41.066 Get Log Page (02h): Supported 00:12:41.066 Identify (06h): Supported 00:12:41.066 Abort (08h): Supported 00:12:41.066 Set Features (09h): Supported 00:12:41.066 Get Features (0Ah): Supported 00:12:41.066 Asynchronous Event Request (0Ch): Supported 00:12:41.066 Keep Alive (18h): Supported 00:12:41.066 I/O Commands 00:12:41.066 ------------ 00:12:41.066 Flush (00h): Supported LBA-Change 00:12:41.066 Write (01h): Supported LBA-Change 00:12:41.066 Read (02h): Supported 00:12:41.066 Compare (05h): Supported 00:12:41.066 Write Zeroes (08h): Supported LBA-Change 00:12:41.066 Dataset Management (09h): Supported LBA-Change 00:12:41.066 Copy (19h): Supported LBA-Change 00:12:41.066 00:12:41.066 Error Log 00:12:41.066 ========= 00:12:41.066 00:12:41.066 Arbitration 00:12:41.066 =========== 00:12:41.066 Arbitration Burst: 1 00:12:41.066 00:12:41.066 Power Management 00:12:41.066 ================ 00:12:41.066 Number of Power States: 1 00:12:41.066 Current Power State: Power State #0 00:12:41.066 Power State #0: 00:12:41.066 Max Power: 0.00 W 00:12:41.066 Non-Operational State: Operational 00:12:41.066 Entry Latency: Not Reported 00:12:41.066 Exit Latency: Not Reported 00:12:41.066 Relative Read Throughput: 0 00:12:41.066 Relative Read Latency: 0 00:12:41.066 Relative Write Throughput: 0 00:12:41.066 Relative Write Latency: 0 00:12:41.066 Idle Power: Not Reported 00:12:41.066 Active Power: Not Reported 00:12:41.066 Non-Operational Permissive Mode: Not Supported 00:12:41.066 00:12:41.066 Health Information 00:12:41.066 ================== 00:12:41.066 Critical Warnings: 00:12:41.066 Available Spare Space: OK 00:12:41.066 Temperature: OK 00:12:41.066 Device Reliability: OK 00:12:41.066 Read Only: No 00:12:41.066 Volatile Memory Backup: OK 00:12:41.066 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:41.066 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:41.066 Available Spare: 0% 00:12:41.066 Available Sp[2024-12-09 05:07:17.487325] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:41.066 [2024-12-09 05:07:17.487337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:41.066 [2024-12-09 05:07:17.487364] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:12:41.066 [2024-12-09 05:07:17.487372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.066 [2024-12-09 05:07:17.487380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.066 [2024-12-09 05:07:17.487385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.066 [2024-12-09 05:07:17.487391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.066 [2024-12-09 05:07:17.491006] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:41.066 [2024-12-09 05:07:17.491018] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:41.066 [2024-12-09 05:07:17.491499] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:41.066 [2024-12-09 05:07:17.491547] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:12:41.066 [2024-12-09 05:07:17.491553] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:12:41.067 [2024-12-09 05:07:17.492507] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:41.067 [2024-12-09 05:07:17.492517] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:12:41.067 [2024-12-09 05:07:17.492565] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:41.067 [2024-12-09 05:07:17.494540] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:41.067 are Threshold: 0% 00:12:41.067 Life Percentage Used: 0% 00:12:41.067 Data Units Read: 0 00:12:41.067 Data Units Written: 0 00:12:41.067 Host Read Commands: 0 00:12:41.067 Host Write Commands: 0 00:12:41.067 Controller Busy Time: 0 minutes 00:12:41.067 Power Cycles: 0 00:12:41.067 Power On Hours: 0 hours 00:12:41.067 Unsafe Shutdowns: 0 00:12:41.067 Unrecoverable Media Errors: 0 00:12:41.067 Lifetime Error Log Entries: 0 00:12:41.067 Warning Temperature Time: 0 minutes 00:12:41.067 Critical Temperature Time: 0 minutes 00:12:41.067 00:12:41.067 Number of Queues 00:12:41.067 ================ 00:12:41.067 Number of I/O Submission Queues: 127 00:12:41.067 Number of I/O Completion Queues: 127 00:12:41.067 00:12:41.067 Active Namespaces 00:12:41.067 ================= 00:12:41.067 Namespace ID:1 00:12:41.067 Error Recovery Timeout: Unlimited 00:12:41.067 Command Set Identifier: NVM (00h) 00:12:41.067 Deallocate: Supported 00:12:41.067 Deallocated/Unwritten Error: Not Supported 00:12:41.067 Deallocated Read Value: Unknown 00:12:41.067 Deallocate in Write Zeroes: Not Supported 00:12:41.067 Deallocated Guard Field: 0xFFFF 00:12:41.067 Flush: Supported 00:12:41.067 Reservation: Supported 00:12:41.067 Namespace Sharing Capabilities: Multiple Controllers 00:12:41.067 Size (in LBAs): 131072 (0GiB) 00:12:41.067 Capacity (in LBAs): 131072 (0GiB) 00:12:41.067 Utilization (in LBAs): 131072 (0GiB) 00:12:41.067 NGUID: 8E32825E592841CC946DEC0AAB4904B8 00:12:41.067 UUID: 8e32825e-5928-41cc-946d-ec0aab4904b8 00:12:41.067 Thin Provisioning: Not Supported 00:12:41.067 Per-NS Atomic Units: Yes 00:12:41.067 Atomic Boundary Size (Normal): 0 00:12:41.067 Atomic Boundary Size (PFail): 0 00:12:41.067 Atomic Boundary Offset: 0 00:12:41.067 Maximum Single Source Range Length: 65535 00:12:41.067 Maximum Copy Length: 65535 00:12:41.067 Maximum Source Range Count: 1 00:12:41.067 NGUID/EUI64 Never Reused: No 00:12:41.067 Namespace Write Protected: No 00:12:41.067 Number of LBA Formats: 1 00:12:41.067 Current LBA Format: LBA Format #00 00:12:41.067 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:41.067 00:12:41.067 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
00:12:41.326 [2024-12-09 05:07:17.814902] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:46.592 Initializing NVMe Controllers 00:12:46.592 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:46.592 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:46.592 Initialization complete. Launching workers. 00:12:46.592 ======================================================== 00:12:46.592 Latency(us) 00:12:46.592 Device Information : IOPS MiB/s Average min max 00:12:46.592 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39902.01 155.87 3207.69 1002.42 7556.77 00:12:46.593 ======================================================== 00:12:46.593 Total : 39902.01 155.87 3207.69 1002.42 7556.77 00:12:46.593 00:12:46.593 [2024-12-09 05:07:22.837310] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:46.593 05:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:46.593 [2024-12-09 05:07:23.155650] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:51.995 Initializing NVMe Controllers 00:12:51.995 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:51.995 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:51.995 Initialization complete. Launching workers. 
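A quick sanity check on the 4 KiB random-read figures above (a back-of-the-envelope sketch, not tool output): by Little's law the sustained concurrency is roughly IOPS times mean latency,

    39902.01 IO/s x 3207.69 us ~= 128 commands in flight,

which matches the -q 128 queue depth the perf run was launched with.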
00:12:51.995 ======================================================== 00:12:51.995 Latency(us) 00:12:51.995 Device Information : IOPS MiB/s Average min max 00:12:51.995 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16042.81 62.67 7977.97 6973.07 8982.75 00:12:51.995 ======================================================== 00:12:51.995 Total : 16042.81 62.67 7977.97 6973.07 8982.75 00:12:51.995 00:12:51.995 [2024-12-09 05:07:28.187461] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:51.995 05:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:51.995 [2024-12-09 05:07:28.485686] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:57.272 [2024-12-09 05:07:33.564299] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:57.272 Initializing NVMe Controllers 00:12:57.272 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:57.272 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:57.272 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:57.272 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:57.272 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:57.272 Initialization complete. Launching workers. 00:12:57.272 Starting thread on core 2 00:12:57.272 Starting thread on core 3 00:12:57.272 Starting thread on core 1 00:12:57.272 05:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:57.531 [2024-12-09 05:07:33.948398] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:00.821 [2024-12-09 05:07:37.212215] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:00.821 Initializing NVMe Controllers 00:13:00.821 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:00.821 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:00.821 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:00.821 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:00.821 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:00.821 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:00.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:00.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:00.821 Initialization complete. Launching workers. 
00:13:00.821 Starting thread on core 1 with urgent priority queue 00:13:00.821 Starting thread on core 2 with urgent priority queue 00:13:00.821 Starting thread on core 3 with urgent priority queue 00:13:00.821 Starting thread on core 0 with urgent priority queue 00:13:00.821 SPDK bdev Controller (SPDK1 ) core 0: 4793.67 IO/s 20.86 secs/100000 ios 00:13:00.821 SPDK bdev Controller (SPDK1 ) core 1: 5510.33 IO/s 18.15 secs/100000 ios 00:13:00.821 SPDK bdev Controller (SPDK1 ) core 2: 4159.67 IO/s 24.04 secs/100000 ios 00:13:00.821 SPDK bdev Controller (SPDK1 ) core 3: 4973.00 IO/s 20.11 secs/100000 ios 00:13:00.821 ======================================================== 00:13:00.821 00:13:00.821 05:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:01.080 [2024-12-09 05:07:37.575605] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:01.080 Initializing NVMe Controllers 00:13:01.080 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:01.080 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:01.080 Namespace ID: 1 size: 0GB 00:13:01.080 Initialization complete. 00:13:01.080 INFO: using host memory buffer for IO 00:13:01.080 Hello world! 00:13:01.080 [2024-12-09 05:07:37.609861] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:01.080 05:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:01.338 [2024-12-09 05:07:37.980448] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:02.711 Initializing NVMe Controllers 00:13:02.711 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:02.711 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:02.711 Initialization complete. Launching workers. 
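The two columns in the arbitration summary above are the same rate viewed two ways (a sketch of the arithmetic, not tool output):

    secs/100000 ios = 100000 / (IO/s), e.g. 100000 / 4793.67 IO/s ~= 20.86 s for core 0.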
00:13:02.711 submit (in ns) avg, min, max = 6500.8, 3261.7, 4001731.3 00:13:02.711 complete (in ns) avg, min, max = 22291.8, 1792.2, 4000660.0 00:13:02.711 00:13:02.711 Submit histogram 00:13:02.711 ================ 00:13:02.711 Range in us Cumulative Count 00:13:02.711 3.256 - 3.270: 0.0125% ( 2) 00:13:02.711 3.270 - 3.283: 0.0627% ( 8) 00:13:02.711 3.283 - 3.297: 0.2005% ( 22) 00:13:02.711 3.297 - 3.311: 0.5076% ( 49) 00:13:02.711 3.311 - 3.325: 1.0214% ( 82) 00:13:02.711 3.325 - 3.339: 2.6067% ( 253) 00:13:02.711 3.339 - 3.353: 6.5668% ( 632) 00:13:02.711 3.353 - 3.367: 12.1812% ( 896) 00:13:02.711 3.367 - 3.381: 18.4661% ( 1003) 00:13:02.711 3.381 - 3.395: 25.2835% ( 1088) 00:13:02.711 3.395 - 3.409: 31.2990% ( 960) 00:13:02.711 3.409 - 3.423: 36.4183% ( 817) 00:13:02.711 3.423 - 3.437: 42.1894% ( 921) 00:13:02.711 3.437 - 3.450: 46.7385% ( 726) 00:13:02.711 3.450 - 3.464: 50.8929% ( 663) 00:13:02.711 3.464 - 3.478: 55.4671% ( 730) 00:13:02.711 3.478 - 3.492: 61.7144% ( 997) 00:13:02.711 3.492 - 3.506: 68.1434% ( 1026) 00:13:02.711 3.506 - 3.520: 72.2288% ( 652) 00:13:02.711 3.520 - 3.534: 76.8031% ( 730) 00:13:02.711 3.534 - 3.548: 81.6091% ( 767) 00:13:02.711 3.548 - 3.562: 84.3098% ( 431) 00:13:02.711 3.562 - 3.590: 86.8475% ( 405) 00:13:02.711 3.590 - 3.617: 87.5493% ( 112) 00:13:02.711 3.617 - 3.645: 88.4579% ( 145) 00:13:02.711 3.645 - 3.673: 89.9743% ( 242) 00:13:02.711 3.673 - 3.701: 91.8416% ( 298) 00:13:02.711 3.701 - 3.729: 93.4582% ( 258) 00:13:02.711 3.729 - 3.757: 95.3944% ( 309) 00:13:02.711 3.757 - 3.784: 97.0612% ( 266) 00:13:02.711 3.784 - 3.812: 98.1954% ( 181) 00:13:02.711 3.812 - 3.840: 98.9034% ( 113) 00:13:02.711 3.840 - 3.868: 99.3358% ( 69) 00:13:02.711 3.868 - 3.896: 99.4987% ( 26) 00:13:02.711 3.896 - 3.923: 99.5802% ( 13) 00:13:02.711 3.923 - 3.951: 99.5990% ( 3) 00:13:02.711 3.951 - 3.979: 99.6052% ( 1) 00:13:02.711 5.426 - 5.454: 99.6115% ( 1) 00:13:02.711 5.482 - 5.510: 99.6240% ( 2) 00:13:02.711 5.510 - 5.537: 99.6303% ( 1) 00:13:02.711 5.537 - 5.565: 99.6491% ( 3) 00:13:02.711 5.565 - 5.593: 99.6554% ( 1) 00:13:02.711 5.621 - 5.649: 99.6616% ( 1) 00:13:02.711 5.816 - 5.843: 99.6742% ( 2) 00:13:02.711 5.899 - 5.927: 99.6930% ( 3) 00:13:02.711 5.983 - 6.010: 99.7055% ( 2) 00:13:02.711 6.010 - 6.038: 99.7118% ( 1) 00:13:02.711 6.066 - 6.094: 99.7180% ( 1) 00:13:02.711 6.122 - 6.150: 99.7243% ( 1) 00:13:02.711 6.400 - 6.428: 99.7306% ( 1) 00:13:02.711 6.456 - 6.483: 99.7368% ( 1) 00:13:02.711 6.706 - 6.734: 99.7431% ( 1) 00:13:02.711 6.957 - 6.984: 99.7494% ( 1) 00:13:02.711 7.012 - 7.040: 99.7556% ( 1) 00:13:02.711 7.068 - 7.096: 99.7619% ( 1) 00:13:02.711 7.096 - 7.123: 99.7682% ( 1) 00:13:02.711 7.179 - 7.235: 99.7744% ( 1) 00:13:02.711 7.346 - 7.402: 99.7932% ( 3) 00:13:02.711 7.402 - 7.457: 99.7995% ( 1) 00:13:02.711 7.457 - 7.513: 99.8058% ( 1) 00:13:02.711 7.569 - 7.624: 99.8120% ( 1) 00:13:02.711 7.791 - 7.847: 99.8246% ( 2) 00:13:02.711 8.125 - 8.181: 99.8308% ( 1) 00:13:02.711 8.292 - 8.348: 99.8433% ( 2) 00:13:02.711 8.403 - 8.459: 99.8496% ( 1) 00:13:02.711 8.459 - 8.515: 99.8559% ( 1) 00:13:02.711 8.960 - 9.016: 99.8621% ( 1) 00:13:02.711 9.071 - 9.127: 99.8747% ( 2) 00:13:02.711 9.294 - 9.350: 99.8809% ( 1) 00:13:02.711 9.628 - 9.683: 99.8872% ( 1) 00:13:02.711 9.962 - 10.017: 99.8935% ( 1) 00:13:02.711 10.463 - 10.518: 99.8997% ( 1) 00:13:02.711 10.574 - 10.630: 99.9060% ( 1) 00:13:02.711 11.075 - 11.130: 99.9123% ( 1) 00:13:02.711 11.798 - 11.854: 99.9185% ( 1) 00:13:02.711 82.365 - 82.810: 99.9248% ( 1) 00:13:02.711 3989.148 - 4017.642: 
100.0000% ( 12) 00:13:02.711 00:13:02.711 Complete histogram 00:13:02.711 ================== 00:13:02.711 Range in us Cumulative Count 00:13:02.711 1.781 - [2024-12-09 05:07:38.999568] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:02.711 1.795: 0.0063% ( 1) 00:13:02.711 1.809 - 1.823: 0.0188% ( 2) 00:13:02.711 1.823 - 1.837: 0.1567% ( 22) 00:13:02.711 1.837 - 1.850: 0.7645% ( 97) 00:13:02.711 1.850 - 1.864: 2.8761% ( 337) 00:13:02.711 1.864 - 1.878: 12.3943% ( 1519) 00:13:02.711 1.878 - 1.892: 65.3299% ( 8448) 00:13:02.711 1.892 - 1.906: 87.1295% ( 3479) 00:13:02.711 1.906 - 1.920: 93.2327% ( 974) 00:13:02.711 1.920 - 1.934: 95.4822% ( 359) 00:13:02.711 1.934 - 1.948: 96.5035% ( 163) 00:13:02.711 1.948 - 1.962: 97.8570% ( 216) 00:13:02.711 1.962 - 1.976: 98.8596% ( 160) 00:13:02.711 1.976 - 1.990: 99.1666% ( 49) 00:13:02.711 1.990 - 2.003: 99.2731% ( 17) 00:13:02.711 2.003 - 2.017: 99.3107% ( 6) 00:13:02.711 2.045 - 2.059: 99.3295% ( 3) 00:13:02.711 2.379 - 2.393: 99.3358% ( 1) 00:13:02.712 4.007 - 4.035: 99.3546% ( 3) 00:13:02.712 4.397 - 4.424: 99.3609% ( 1) 00:13:02.712 4.563 - 4.591: 99.3671% ( 1) 00:13:02.712 4.842 - 4.870: 99.3734% ( 1) 00:13:02.712 5.009 - 5.037: 99.3797% ( 1) 00:13:02.712 5.120 - 5.148: 99.3859% ( 1) 00:13:02.712 5.510 - 5.537: 99.3922% ( 1) 00:13:02.712 5.760 - 5.788: 99.3985% ( 1) 00:13:02.712 5.955 - 5.983: 99.4047% ( 1) 00:13:02.712 6.066 - 6.094: 99.4110% ( 1) 00:13:02.712 6.317 - 6.344: 99.4173% ( 1) 00:13:02.712 6.567 - 6.595: 99.4235% ( 1) 00:13:02.712 6.762 - 6.790: 99.4298% ( 1) 00:13:02.712 6.790 - 6.817: 99.4361% ( 1) 00:13:02.712 7.179 - 7.235: 99.4423% ( 1) 00:13:02.712 7.290 - 7.346: 99.4486% ( 1) 00:13:02.712 7.569 - 7.624: 99.4549% ( 1) 00:13:02.712 7.847 - 7.903: 99.4611% ( 1) 00:13:02.712 8.125 - 8.181: 99.4674% ( 1) 00:13:02.712 8.737 - 8.793: 99.4737% ( 1) 00:13:02.712 11.242 - 11.297: 99.4799% ( 1) 00:13:02.712 15.471 - 15.583: 99.4862% ( 1) 00:13:02.712 1852.104 - 1866.351: 99.4924% ( 1) 00:13:02.712 3989.148 - 4017.642: 100.0000% ( 81) 00:13:02.712 00:13:02.712 05:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:02.712 05:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:02.712 05:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:02.712 05:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:02.712 05:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:02.712 [ 00:13:02.712 { 00:13:02.712 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:02.712 "subtype": "Discovery", 00:13:02.712 "listen_addresses": [], 00:13:02.712 "allow_any_host": true, 00:13:02.712 "hosts": [] 00:13:02.712 }, 00:13:02.712 { 00:13:02.712 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:02.712 "subtype": "NVMe", 00:13:02.712 "listen_addresses": [ 00:13:02.712 { 00:13:02.712 "trtype": "VFIOUSER", 00:13:02.712 "adrfam": "IPv4", 00:13:02.712 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:02.712 "trsvcid": "0" 00:13:02.712 } 00:13:02.712 ], 00:13:02.712 "allow_any_host": true, 00:13:02.712 "hosts": [], 00:13:02.712 "serial_number": 
"SPDK1", 00:13:02.712 "model_number": "SPDK bdev Controller", 00:13:02.712 "max_namespaces": 32, 00:13:02.712 "min_cntlid": 1, 00:13:02.712 "max_cntlid": 65519, 00:13:02.712 "namespaces": [ 00:13:02.712 { 00:13:02.712 "nsid": 1, 00:13:02.712 "bdev_name": "Malloc1", 00:13:02.712 "name": "Malloc1", 00:13:02.712 "nguid": "8E32825E592841CC946DEC0AAB4904B8", 00:13:02.712 "uuid": "8e32825e-5928-41cc-946d-ec0aab4904b8" 00:13:02.712 } 00:13:02.712 ] 00:13:02.712 }, 00:13:02.712 { 00:13:02.712 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:02.712 "subtype": "NVMe", 00:13:02.712 "listen_addresses": [ 00:13:02.712 { 00:13:02.712 "trtype": "VFIOUSER", 00:13:02.712 "adrfam": "IPv4", 00:13:02.712 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:02.712 "trsvcid": "0" 00:13:02.712 } 00:13:02.712 ], 00:13:02.712 "allow_any_host": true, 00:13:02.712 "hosts": [], 00:13:02.712 "serial_number": "SPDK2", 00:13:02.712 "model_number": "SPDK bdev Controller", 00:13:02.712 "max_namespaces": 32, 00:13:02.712 "min_cntlid": 1, 00:13:02.712 "max_cntlid": 65519, 00:13:02.712 "namespaces": [ 00:13:02.712 { 00:13:02.712 "nsid": 1, 00:13:02.712 "bdev_name": "Malloc2", 00:13:02.712 "name": "Malloc2", 00:13:02.712 "nguid": "F9582CB2F5CE430A9E3C6A576EE9404A", 00:13:02.712 "uuid": "f9582cb2-f5ce-430a-9e3c-6a576ee9404a" 00:13:02.712 } 00:13:02.712 ] 00:13:02.712 } 00:13:02.712 ] 00:13:02.712 05:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:02.712 05:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:02.712 05:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3545174 00:13:02.712 05:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:02.712 05:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:02.712 05:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:02.712 05:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:02.712 05:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:02.712 05:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:02.712 05:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:02.970 [2024-12-09 05:07:39.402772] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:02.970 Malloc3 00:13:02.970 05:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:03.228 [2024-12-09 05:07:39.642663] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:03.228 05:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:03.228 Asynchronous Event Request test 00:13:03.228 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:03.228 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:03.228 Registering asynchronous event callbacks... 00:13:03.228 Starting namespace attribute notice tests for all controllers... 00:13:03.228 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:03.228 aer_cb - Changed Namespace 00:13:03.228 Cleaning up... 00:13:03.228 [ 00:13:03.228 { 00:13:03.228 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:03.228 "subtype": "Discovery", 00:13:03.228 "listen_addresses": [], 00:13:03.228 "allow_any_host": true, 00:13:03.228 "hosts": [] 00:13:03.228 }, 00:13:03.228 { 00:13:03.228 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:03.228 "subtype": "NVMe", 00:13:03.228 "listen_addresses": [ 00:13:03.228 { 00:13:03.228 "trtype": "VFIOUSER", 00:13:03.228 "adrfam": "IPv4", 00:13:03.228 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:03.228 "trsvcid": "0" 00:13:03.228 } 00:13:03.228 ], 00:13:03.228 "allow_any_host": true, 00:13:03.228 "hosts": [], 00:13:03.228 "serial_number": "SPDK1", 00:13:03.228 "model_number": "SPDK bdev Controller", 00:13:03.228 "max_namespaces": 32, 00:13:03.228 "min_cntlid": 1, 00:13:03.228 "max_cntlid": 65519, 00:13:03.228 "namespaces": [ 00:13:03.228 { 00:13:03.228 "nsid": 1, 00:13:03.228 "bdev_name": "Malloc1", 00:13:03.228 "name": "Malloc1", 00:13:03.228 "nguid": "8E32825E592841CC946DEC0AAB4904B8", 00:13:03.228 "uuid": "8e32825e-5928-41cc-946d-ec0aab4904b8" 00:13:03.228 }, 00:13:03.228 { 00:13:03.228 "nsid": 2, 00:13:03.228 "bdev_name": "Malloc3", 00:13:03.228 "name": "Malloc3", 00:13:03.228 "nguid": "279C188B30DF41A29497504C0B6E9B11", 00:13:03.228 "uuid": "279c188b-30df-41a2-9497-504c0b6e9b11" 00:13:03.228 } 00:13:03.228 ] 00:13:03.228 }, 00:13:03.228 { 00:13:03.228 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:03.228 "subtype": "NVMe", 00:13:03.228 "listen_addresses": [ 00:13:03.228 { 00:13:03.228 "trtype": "VFIOUSER", 00:13:03.228 "adrfam": "IPv4", 00:13:03.228 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:03.228 "trsvcid": "0" 00:13:03.228 } 00:13:03.228 ], 00:13:03.228 "allow_any_host": true, 00:13:03.228 "hosts": [], 00:13:03.228 "serial_number": "SPDK2", 00:13:03.228 "model_number": "SPDK bdev 
Controller", 00:13:03.228 "max_namespaces": 32, 00:13:03.228 "min_cntlid": 1, 00:13:03.228 "max_cntlid": 65519, 00:13:03.228 "namespaces": [ 00:13:03.228 { 00:13:03.228 "nsid": 1, 00:13:03.228 "bdev_name": "Malloc2", 00:13:03.228 "name": "Malloc2", 00:13:03.228 "nguid": "F9582CB2F5CE430A9E3C6A576EE9404A", 00:13:03.228 "uuid": "f9582cb2-f5ce-430a-9e3c-6a576ee9404a" 00:13:03.228 } 00:13:03.228 ] 00:13:03.228 } 00:13:03.228 ] 00:13:03.228 05:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3545174 00:13:03.228 05:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:03.228 05:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:03.228 05:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:03.228 05:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:03.486 [2024-12-09 05:07:39.875887] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:13:03.486 [2024-12-09 05:07:39.875922] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3545404 ] 00:13:03.486 [2024-12-09 05:07:39.921720] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:03.487 [2024-12-09 05:07:39.923962] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:03.487 [2024-12-09 05:07:39.923988] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fae3a792000 00:13:03.487 [2024-12-09 05:07:39.924966] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:03.487 [2024-12-09 05:07:39.925974] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:03.487 [2024-12-09 05:07:39.926983] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:03.487 [2024-12-09 05:07:39.927986] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:03.487 [2024-12-09 05:07:39.928991] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:03.487 [2024-12-09 05:07:39.930011] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:03.487 [2024-12-09 05:07:39.931020] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:03.487 [2024-12-09 05:07:39.932030] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:13:03.487 [2024-12-09 05:07:39.933039] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:03.487 [2024-12-09 05:07:39.933049] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fae3a787000 00:13:03.487 [2024-12-09 05:07:39.934111] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:03.487 [2024-12-09 05:07:39.948462] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:03.487 [2024-12-09 05:07:39.948488] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:13:03.487 [2024-12-09 05:07:39.950564] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:03.487 [2024-12-09 05:07:39.950601] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:03.487 [2024-12-09 05:07:39.950676] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:13:03.487 [2024-12-09 05:07:39.950689] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:13:03.487 [2024-12-09 05:07:39.950694] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:13:03.487 [2024-12-09 05:07:39.952004] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:03.487 [2024-12-09 05:07:39.952015] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:13:03.487 [2024-12-09 05:07:39.952022] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:13:03.487 [2024-12-09 05:07:39.952567] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:03.487 [2024-12-09 05:07:39.952577] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:13:03.487 [2024-12-09 05:07:39.952584] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:03.487 [2024-12-09 05:07:39.953570] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:03.487 [2024-12-09 05:07:39.953579] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:03.487 [2024-12-09 05:07:39.954584] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:03.487 [2024-12-09 05:07:39.954592] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:13:03.487 [2024-12-09 05:07:39.954597] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:03.487 [2024-12-09 05:07:39.954603] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:03.487 [2024-12-09 05:07:39.954708] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:13:03.487 [2024-12-09 05:07:39.954712] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:03.487 [2024-12-09 05:07:39.954717] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:03.487 [2024-12-09 05:07:39.955594] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:03.487 [2024-12-09 05:07:39.956601] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:03.487 [2024-12-09 05:07:39.957608] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:03.487 [2024-12-09 05:07:39.958617] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:03.487 [2024-12-09 05:07:39.958656] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:03.487 [2024-12-09 05:07:39.959634] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:03.487 [2024-12-09 05:07:39.959643] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:03.487 [2024-12-09 05:07:39.959648] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:03.487 [2024-12-09 05:07:39.959665] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:13:03.487 [2024-12-09 05:07:39.959672] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:03.487 [2024-12-09 05:07:39.959685] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:03.487 [2024-12-09 05:07:39.959690] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:03.487 [2024-12-09 05:07:39.959693] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:03.487 [2024-12-09 05:07:39.959707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:03.487 [2024-12-09 05:07:39.970008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:03.487 
[2024-12-09 05:07:39.970019] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:13:03.487 [2024-12-09 05:07:39.970024] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:13:03.487 [2024-12-09 05:07:39.970028] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:13:03.487 [2024-12-09 05:07:39.970032] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:03.487 [2024-12-09 05:07:39.970037] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:13:03.487 [2024-12-09 05:07:39.970041] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:13:03.487 [2024-12-09 05:07:39.970045] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:13:03.488 [2024-12-09 05:07:39.970052] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:03.488 [2024-12-09 05:07:39.970063] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:03.488 [2024-12-09 05:07:39.978004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:03.488 [2024-12-09 05:07:39.978016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.488 [2024-12-09 05:07:39.978023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.488 [2024-12-09 05:07:39.978031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.488 [2024-12-09 05:07:39.978038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.488 [2024-12-09 05:07:39.978042] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:03.488 [2024-12-09 05:07:39.978052] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:03.488 [2024-12-09 05:07:39.978061] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:03.488 [2024-12-09 05:07:39.986003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:03.488 [2024-12-09 05:07:39.986011] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:13:03.488 [2024-12-09 05:07:39.986016] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:13:03.488 [2024-12-09 05:07:39.986024] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:13:03.488 [2024-12-09 05:07:39.986030] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:03.488 [2024-12-09 05:07:39.986040] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:03.488 [2024-12-09 05:07:39.994003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:03.488 [2024-12-09 05:07:39.994062] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:13:03.488 [2024-12-09 05:07:39.994069] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:03.488 [2024-12-09 05:07:39.994076] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:03.488 [2024-12-09 05:07:39.994081] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:03.488 [2024-12-09 05:07:39.994084] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:03.488 [2024-12-09 05:07:39.994090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:03.488 [2024-12-09 05:07:40.002006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:03.488 [2024-12-09 05:07:40.002020] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:13:03.488 [2024-12-09 05:07:40.002029] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:13:03.488 [2024-12-09 05:07:40.002036] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:03.488 [2024-12-09 05:07:40.002043] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:03.488 [2024-12-09 05:07:40.002047] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:03.488 [2024-12-09 05:07:40.002050] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:03.488 [2024-12-09 05:07:40.002056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:03.488 [2024-12-09 05:07:40.010008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:03.488 [2024-12-09 05:07:40.010034] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:03.488 [2024-12-09 05:07:40.010045] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:13:03.488 [2024-12-09 05:07:40.010055] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:03.488 [2024-12-09 05:07:40.010060] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:03.488 [2024-12-09 05:07:40.010063] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:03.488 [2024-12-09 05:07:40.010070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:03.488 [2024-12-09 05:07:40.018003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:03.488 [2024-12-09 05:07:40.018018] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:03.488 [2024-12-09 05:07:40.018025] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:03.488 [2024-12-09 05:07:40.018032] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:13:03.488 [2024-12-09 05:07:40.018041] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:03.488 [2024-12-09 05:07:40.018046] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:03.488 [2024-12-09 05:07:40.018051] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:13:03.488 [2024-12-09 05:07:40.018056] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:03.488 [2024-12-09 05:07:40.018060] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:13:03.488 [2024-12-09 05:07:40.018065] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:13:03.488 [2024-12-09 05:07:40.018080] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:03.488 [2024-12-09 05:07:40.026015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:03.488 [2024-12-09 05:07:40.026040] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:03.488 [2024-12-09 05:07:40.034005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:03.488 [2024-12-09 05:07:40.034022] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:03.488 [2024-12-09 05:07:40.042002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:13:03.488 [2024-12-09 05:07:40.042015] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:03.488 [2024-12-09 05:07:40.050007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:03.489 [2024-12-09 05:07:40.050026] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:03.489 [2024-12-09 05:07:40.050031] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:03.489 [2024-12-09 05:07:40.050034] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:03.489 [2024-12-09 05:07:40.050037] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:03.489 [2024-12-09 05:07:40.050040] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:03.489 [2024-12-09 05:07:40.050046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:03.489 [2024-12-09 05:07:40.050054] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:03.489 [2024-12-09 05:07:40.050058] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:03.489 [2024-12-09 05:07:40.050061] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:03.489 [2024-12-09 05:07:40.050066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:03.489 [2024-12-09 05:07:40.050073] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:03.489 [2024-12-09 05:07:40.050077] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:03.489 [2024-12-09 05:07:40.050080] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:03.489 [2024-12-09 05:07:40.050088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:03.489 [2024-12-09 05:07:40.050095] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:03.489 [2024-12-09 05:07:40.050099] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:03.489 [2024-12-09 05:07:40.050102] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:03.489 [2024-12-09 05:07:40.050107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:03.489 [2024-12-09 05:07:40.058004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:03.489 [2024-12-09 05:07:40.058029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:03.489 [2024-12-09 05:07:40.058040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:03.489 
[2024-12-09 05:07:40.058047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:03.489 ===================================================== 00:13:03.489 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:03.489 ===================================================== 00:13:03.489 Controller Capabilities/Features 00:13:03.489 ================================ 00:13:03.489 Vendor ID: 4e58 00:13:03.489 Subsystem Vendor ID: 4e58 00:13:03.489 Serial Number: SPDK2 00:13:03.489 Model Number: SPDK bdev Controller 00:13:03.489 Firmware Version: 25.01 00:13:03.489 Recommended Arb Burst: 6 00:13:03.489 IEEE OUI Identifier: 8d 6b 50 00:13:03.489 Multi-path I/O 00:13:03.489 May have multiple subsystem ports: Yes 00:13:03.489 May have multiple controllers: Yes 00:13:03.489 Associated with SR-IOV VF: No 00:13:03.489 Max Data Transfer Size: 131072 00:13:03.489 Max Number of Namespaces: 32 00:13:03.489 Max Number of I/O Queues: 127 00:13:03.489 NVMe Specification Version (VS): 1.3 00:13:03.489 NVMe Specification Version (Identify): 1.3 00:13:03.489 Maximum Queue Entries: 256 00:13:03.489 Contiguous Queues Required: Yes 00:13:03.489 Arbitration Mechanisms Supported 00:13:03.489 Weighted Round Robin: Not Supported 00:13:03.489 Vendor Specific: Not Supported 00:13:03.489 Reset Timeout: 15000 ms 00:13:03.489 Doorbell Stride: 4 bytes 00:13:03.489 NVM Subsystem Reset: Not Supported 00:13:03.489 Command Sets Supported 00:13:03.489 NVM Command Set: Supported 00:13:03.489 Boot Partition: Not Supported 00:13:03.489 Memory Page Size Minimum: 4096 bytes 00:13:03.489 Memory Page Size Maximum: 4096 bytes 00:13:03.489 Persistent Memory Region: Not Supported 00:13:03.489 Optional Asynchronous Events Supported 00:13:03.489 Namespace Attribute Notices: Supported 00:13:03.489 Firmware Activation Notices: Not Supported 00:13:03.489 ANA Change Notices: Not Supported 00:13:03.489 PLE Aggregate Log Change Notices: Not Supported 00:13:03.489 LBA Status Info Alert Notices: Not Supported 00:13:03.489 EGE Aggregate Log Change Notices: Not Supported 00:13:03.489 Normal NVM Subsystem Shutdown event: Not Supported 00:13:03.489 Zone Descriptor Change Notices: Not Supported 00:13:03.489 Discovery Log Change Notices: Not Supported 00:13:03.489 Controller Attributes 00:13:03.489 128-bit Host Identifier: Supported 00:13:03.489 Non-Operational Permissive Mode: Not Supported 00:13:03.489 NVM Sets: Not Supported 00:13:03.489 Read Recovery Levels: Not Supported 00:13:03.489 Endurance Groups: Not Supported 00:13:03.489 Predictable Latency Mode: Not Supported 00:13:03.489 Traffic Based Keep ALive: Not Supported 00:13:03.489 Namespace Granularity: Not Supported 00:13:03.489 SQ Associations: Not Supported 00:13:03.489 UUID List: Not Supported 00:13:03.489 Multi-Domain Subsystem: Not Supported 00:13:03.489 Fixed Capacity Management: Not Supported 00:13:03.489 Variable Capacity Management: Not Supported 00:13:03.489 Delete Endurance Group: Not Supported 00:13:03.489 Delete NVM Set: Not Supported 00:13:03.489 Extended LBA Formats Supported: Not Supported 00:13:03.489 Flexible Data Placement Supported: Not Supported 00:13:03.489 00:13:03.489 Controller Memory Buffer Support 00:13:03.489 ================================ 00:13:03.489 Supported: No 00:13:03.489 00:13:03.489 Persistent Memory Region Support 00:13:03.489 ================================ 00:13:03.489 Supported: No 00:13:03.489 00:13:03.489 Admin Command Set Attributes 
00:13:03.489 ============================ 00:13:03.489 Security Send/Receive: Not Supported 00:13:03.489 Format NVM: Not Supported 00:13:03.489 Firmware Activate/Download: Not Supported 00:13:03.489 Namespace Management: Not Supported 00:13:03.489 Device Self-Test: Not Supported 00:13:03.490 Directives: Not Supported 00:13:03.490 NVMe-MI: Not Supported 00:13:03.490 Virtualization Management: Not Supported 00:13:03.490 Doorbell Buffer Config: Not Supported 00:13:03.490 Get LBA Status Capability: Not Supported 00:13:03.490 Command & Feature Lockdown Capability: Not Supported 00:13:03.490 Abort Command Limit: 4 00:13:03.490 Async Event Request Limit: 4 00:13:03.490 Number of Firmware Slots: N/A 00:13:03.490 Firmware Slot 1 Read-Only: N/A 00:13:03.490 Firmware Activation Without Reset: N/A 00:13:03.490 Multiple Update Detection Support: N/A 00:13:03.490 Firmware Update Granularity: No Information Provided 00:13:03.490 Per-Namespace SMART Log: No 00:13:03.490 Asymmetric Namespace Access Log Page: Not Supported 00:13:03.490 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:03.490 Command Effects Log Page: Supported 00:13:03.490 Get Log Page Extended Data: Supported 00:13:03.490 Telemetry Log Pages: Not Supported 00:13:03.490 Persistent Event Log Pages: Not Supported 00:13:03.490 Supported Log Pages Log Page: May Support 00:13:03.490 Commands Supported & Effects Log Page: Not Supported 00:13:03.490 Feature Identifiers & Effects Log Page:May Support 00:13:03.490 NVMe-MI Commands & Effects Log Page: May Support 00:13:03.490 Data Area 4 for Telemetry Log: Not Supported 00:13:03.490 Error Log Page Entries Supported: 128 00:13:03.490 Keep Alive: Supported 00:13:03.490 Keep Alive Granularity: 10000 ms 00:13:03.490 00:13:03.490 NVM Command Set Attributes 00:13:03.490 ========================== 00:13:03.490 Submission Queue Entry Size 00:13:03.490 Max: 64 00:13:03.490 Min: 64 00:13:03.490 Completion Queue Entry Size 00:13:03.490 Max: 16 00:13:03.490 Min: 16 00:13:03.490 Number of Namespaces: 32 00:13:03.490 Compare Command: Supported 00:13:03.490 Write Uncorrectable Command: Not Supported 00:13:03.490 Dataset Management Command: Supported 00:13:03.490 Write Zeroes Command: Supported 00:13:03.490 Set Features Save Field: Not Supported 00:13:03.490 Reservations: Not Supported 00:13:03.490 Timestamp: Not Supported 00:13:03.490 Copy: Supported 00:13:03.490 Volatile Write Cache: Present 00:13:03.490 Atomic Write Unit (Normal): 1 00:13:03.490 Atomic Write Unit (PFail): 1 00:13:03.490 Atomic Compare & Write Unit: 1 00:13:03.490 Fused Compare & Write: Supported 00:13:03.490 Scatter-Gather List 00:13:03.490 SGL Command Set: Supported (Dword aligned) 00:13:03.490 SGL Keyed: Not Supported 00:13:03.490 SGL Bit Bucket Descriptor: Not Supported 00:13:03.490 SGL Metadata Pointer: Not Supported 00:13:03.490 Oversized SGL: Not Supported 00:13:03.490 SGL Metadata Address: Not Supported 00:13:03.490 SGL Offset: Not Supported 00:13:03.490 Transport SGL Data Block: Not Supported 00:13:03.490 Replay Protected Memory Block: Not Supported 00:13:03.490 00:13:03.490 Firmware Slot Information 00:13:03.490 ========================= 00:13:03.490 Active slot: 1 00:13:03.490 Slot 1 Firmware Revision: 25.01 00:13:03.490 00:13:03.490 00:13:03.490 Commands Supported and Effects 00:13:03.490 ============================== 00:13:03.490 Admin Commands 00:13:03.490 -------------- 00:13:03.490 Get Log Page (02h): Supported 00:13:03.490 Identify (06h): Supported 00:13:03.490 Abort (08h): Supported 00:13:03.490 Set Features (09h): Supported 
00:13:03.490 Get Features (0Ah): Supported 00:13:03.490 Asynchronous Event Request (0Ch): Supported 00:13:03.490 Keep Alive (18h): Supported 00:13:03.490 I/O Commands 00:13:03.490 ------------ 00:13:03.490 Flush (00h): Supported LBA-Change 00:13:03.490 Write (01h): Supported LBA-Change 00:13:03.490 Read (02h): Supported 00:13:03.490 Compare (05h): Supported 00:13:03.490 Write Zeroes (08h): Supported LBA-Change 00:13:03.490 Dataset Management (09h): Supported LBA-Change 00:13:03.490 Copy (19h): Supported LBA-Change 00:13:03.490 00:13:03.490 Error Log 00:13:03.490 ========= 00:13:03.490 00:13:03.490 Arbitration 00:13:03.490 =========== 00:13:03.490 Arbitration Burst: 1 00:13:03.490 00:13:03.490 Power Management 00:13:03.490 ================ 00:13:03.490 Number of Power States: 1 00:13:03.490 Current Power State: Power State #0 00:13:03.490 Power State #0: 00:13:03.490 Max Power: 0.00 W 00:13:03.490 Non-Operational State: Operational 00:13:03.490 Entry Latency: Not Reported 00:13:03.490 Exit Latency: Not Reported 00:13:03.490 Relative Read Throughput: 0 00:13:03.490 Relative Read Latency: 0 00:13:03.490 Relative Write Throughput: 0 00:13:03.490 Relative Write Latency: 0 00:13:03.490 Idle Power: Not Reported 00:13:03.490 Active Power: Not Reported 00:13:03.490 Non-Operational Permissive Mode: Not Supported 00:13:03.490 00:13:03.490 Health Information 00:13:03.490 ================== 00:13:03.490 Critical Warnings: 00:13:03.490 Available Spare Space: OK 00:13:03.490 Temperature: OK 00:13:03.490 Device Reliability: OK 00:13:03.490 Read Only: No 00:13:03.490 Volatile Memory Backup: OK 00:13:03.490 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:03.490 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:03.490 Available Spare: 0% 00:13:03.490 Available Sp[2024-12-09 05:07:40.058141] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:03.490 [2024-12-09 05:07:40.066003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:03.490 [2024-12-09 05:07:40.066038] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:13:03.491 [2024-12-09 05:07:40.066048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.491 [2024-12-09 05:07:40.066055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.491 [2024-12-09 05:07:40.066060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.491 [2024-12-09 05:07:40.066066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.491 [2024-12-09 05:07:40.066126] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:03.491 [2024-12-09 05:07:40.066139] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:03.491 [2024-12-09 05:07:40.067130] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:03.491 [2024-12-09 05:07:40.067181] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:13:03.491 [2024-12-09 05:07:40.067187] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:13:03.491 [2024-12-09 05:07:40.068132] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:03.491 [2024-12-09 05:07:40.068144] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:13:03.491 [2024-12-09 05:07:40.068271] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:03.491 [2024-12-09 05:07:40.069384] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:03.758 are Threshold: 0% 00:13:03.758 Life Percentage Used: 0% 00:13:03.758 Data Units Read: 0 00:13:03.758 Data Units Written: 0 00:13:03.758 Host Read Commands: 0 00:13:03.758 Host Write Commands: 0 00:13:03.758 Controller Busy Time: 0 minutes 00:13:03.758 Power Cycles: 0 00:13:03.758 Power On Hours: 0 hours 00:13:03.758 Unsafe Shutdowns: 0 00:13:03.758 Unrecoverable Media Errors: 0 00:13:03.758 Lifetime Error Log Entries: 0 00:13:03.758 Warning Temperature Time: 0 minutes 00:13:03.758 Critical Temperature Time: 0 minutes 00:13:03.758 00:13:03.758 Number of Queues 00:13:03.758 ================ 00:13:03.758 Number of I/O Submission Queues: 127 00:13:03.758 Number of I/O Completion Queues: 127 00:13:03.758 00:13:03.758 Active Namespaces 00:13:03.758 ================= 00:13:03.758 Namespace ID:1 00:13:03.758 Error Recovery Timeout: Unlimited 00:13:03.758 Command Set Identifier: NVM (00h) 00:13:03.758 Deallocate: Supported 00:13:03.758 Deallocated/Unwritten Error: Not Supported 00:13:03.758 Deallocated Read Value: Unknown 00:13:03.758 Deallocate in Write Zeroes: Not Supported 00:13:03.758 Deallocated Guard Field: 0xFFFF 00:13:03.758 Flush: Supported 00:13:03.758 Reservation: Supported 00:13:03.758 Namespace Sharing Capabilities: Multiple Controllers 00:13:03.758 Size (in LBAs): 131072 (0GiB) 00:13:03.758 Capacity (in LBAs): 131072 (0GiB) 00:13:03.758 Utilization (in LBAs): 131072 (0GiB) 00:13:03.758 NGUID: F9582CB2F5CE430A9E3C6A576EE9404A 00:13:03.758 UUID: f9582cb2-f5ce-430a-9e3c-6a576ee9404a 00:13:03.758 Thin Provisioning: Not Supported 00:13:03.758 Per-NS Atomic Units: Yes 00:13:03.758 Atomic Boundary Size (Normal): 0 00:13:03.758 Atomic Boundary Size (PFail): 0 00:13:03.758 Atomic Boundary Offset: 0 00:13:03.758 Maximum Single Source Range Length: 65535 00:13:03.758 Maximum Copy Length: 65535 00:13:03.758 Maximum Source Range Count: 1 00:13:03.758 NGUID/EUI64 Never Reused: No 00:13:03.758 Namespace Write Protected: No 00:13:03.758 Number of LBA Formats: 1 00:13:03.758 Current LBA Format: LBA Format #00 00:13:03.758 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:03.758 00:13:03.758 05:07:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:03.759 [2024-12-09 05:07:40.380301] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:09.027 Initializing NVMe Controllers 00:13:09.027 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:09.027 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:09.027 Initialization complete. Launching workers. 00:13:09.027 ======================================================== 00:13:09.027 Latency(us) 00:13:09.027 Device Information : IOPS MiB/s Average min max 00:13:09.027 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39953.16 156.07 3203.58 987.78 8579.49 00:13:09.027 ======================================================== 00:13:09.027 Total : 39953.16 156.07 3203.58 987.78 8579.49 00:13:09.027 00:13:09.027 [2024-12-09 05:07:45.489267] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:09.027 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:09.285 [2024-12-09 05:07:45.809205] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:14.551 Initializing NVMe Controllers 00:13:14.551 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:14.551 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:14.551 Initialization complete. Launching workers. 00:13:14.551 ======================================================== 00:13:14.551 Latency(us) 00:13:14.551 Device Information : IOPS MiB/s Average min max 00:13:14.551 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39829.49 155.58 3213.54 989.58 8472.60 00:13:14.551 ======================================================== 00:13:14.551 Total : 39829.49 155.58 3213.54 989.58 8472.60 00:13:14.551 00:13:14.551 [2024-12-09 05:07:50.827308] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:14.551 05:07:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:14.551 [2024-12-09 05:07:51.122412] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:19.821 [2024-12-09 05:07:56.262086] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:19.821 Initializing NVMe Controllers 00:13:19.821 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:19.821 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:19.821 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:19.822 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:19.822 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:19.822 Initialization complete. Launching workers. 
00:13:19.822 Starting thread on core 2 00:13:19.822 Starting thread on core 3 00:13:19.822 Starting thread on core 1 00:13:19.822 05:07:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:20.079 [2024-12-09 05:07:56.639529] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:23.365 [2024-12-09 05:07:59.708304] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:23.365 Initializing NVMe Controllers 00:13:23.365 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:23.365 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:23.365 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:23.365 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:23.365 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:23.365 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:23.365 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:23.365 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:23.365 Initialization complete. Launching workers. 00:13:23.365 Starting thread on core 1 with urgent priority queue 00:13:23.365 Starting thread on core 2 with urgent priority queue 00:13:23.365 Starting thread on core 3 with urgent priority queue 00:13:23.365 Starting thread on core 0 with urgent priority queue 00:13:23.365 SPDK bdev Controller (SPDK2 ) core 0: 8288.00 IO/s 12.07 secs/100000 ios 00:13:23.365 SPDK bdev Controller (SPDK2 ) core 1: 8093.33 IO/s 12.36 secs/100000 ios 00:13:23.365 SPDK bdev Controller (SPDK2 ) core 2: 9080.67 IO/s 11.01 secs/100000 ios 00:13:23.365 SPDK bdev Controller (SPDK2 ) core 3: 8509.00 IO/s 11.75 secs/100000 ios 00:13:23.365 ======================================================== 00:13:23.365 00:13:23.365 05:07:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:23.624 [2024-12-09 05:08:00.083452] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:23.624 Initializing NVMe Controllers 00:13:23.624 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:23.624 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:23.624 Namespace ID: 1 size: 0GB 00:13:23.624 Initialization complete. 00:13:23.624 INFO: using host memory buffer for IO 00:13:23.624 Hello world! 
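The aer_vfio_user step exercised earlier for the first controller (and repeated below for the second, using Malloc4) boils down to hot-adding a namespace over RPC while the aer example waits for a namespace-attribute notice. A minimal sketch of that RPC sequence, assuming the same build paths shown in this log and omitting the /tmp/aer_touch_file coordination the harness uses:

# create a 64 MiB malloc bdev with 512-byte blocks (same names the harness uses)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
# attach it as namespace 2 of the vfio-user subsystem; this is what triggers the AER namespace notice
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
# confirm the new namespace is visible on the target
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems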
00:13:23.624 [2024-12-09 05:08:00.093517] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:23.624 05:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:23.882 [2024-12-09 05:08:00.466950] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:25.261 Initializing NVMe Controllers 00:13:25.261 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:25.261 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:25.261 Initialization complete. Launching workers. 00:13:25.261 submit (in ns) avg, min, max = 6013.0, 3265.2, 3999100.9 00:13:25.261 complete (in ns) avg, min, max = 21591.8, 1813.0, 3999249.6 00:13:25.261 00:13:25.261 Submit histogram 00:13:25.261 ================ 00:13:25.261 Range in us Cumulative Count 00:13:25.261 3.256 - 3.270: 0.0063% ( 1) 00:13:25.261 3.270 - 3.283: 0.0696% ( 10) 00:13:25.261 3.283 - 3.297: 0.2909% ( 35) 00:13:25.261 3.297 - 3.311: 0.7969% ( 80) 00:13:25.261 3.311 - 3.325: 1.3850% ( 93) 00:13:25.261 3.325 - 3.339: 3.1811% ( 284) 00:13:25.261 3.339 - 3.353: 7.3931% ( 666) 00:13:25.261 3.353 - 3.367: 13.0724% ( 898) 00:13:25.261 3.367 - 3.381: 18.5745% ( 870) 00:13:25.261 3.381 - 3.395: 25.3162% ( 1066) 00:13:25.261 3.395 - 3.409: 31.7038% ( 1010) 00:13:25.261 3.409 - 3.423: 36.3774% ( 739) 00:13:25.261 3.423 - 3.437: 41.9428% ( 880) 00:13:25.261 3.437 - 3.450: 47.0971% ( 815) 00:13:25.261 3.450 - 3.464: 50.9360% ( 607) 00:13:25.261 3.464 - 3.478: 54.9583% ( 636) 00:13:25.261 3.478 - 3.492: 61.4660% ( 1029) 00:13:25.261 3.492 - 3.506: 68.0243% ( 1037) 00:13:25.261 3.506 - 3.520: 72.2426% ( 667) 00:13:25.261 3.520 - 3.534: 76.2016% ( 626) 00:13:25.261 3.534 - 3.548: 81.1156% ( 777) 00:13:25.261 3.548 - 3.562: 84.2841% ( 501) 00:13:25.261 3.562 - 3.590: 87.2945% ( 476) 00:13:25.261 3.590 - 3.617: 88.0724% ( 123) 00:13:25.261 3.617 - 3.645: 89.0020% ( 147) 00:13:25.261 3.645 - 3.673: 90.6337% ( 258) 00:13:25.261 3.673 - 3.701: 92.2717% ( 259) 00:13:25.261 3.701 - 3.729: 93.7516% ( 234) 00:13:25.261 3.729 - 3.757: 95.5730% ( 288) 00:13:25.261 3.757 - 3.784: 97.0402% ( 232) 00:13:25.261 3.784 - 3.812: 98.1976% ( 183) 00:13:25.261 3.812 - 3.840: 98.8616% ( 105) 00:13:25.261 3.840 - 3.868: 99.3170% ( 72) 00:13:25.261 3.868 - 3.896: 99.5763% ( 41) 00:13:25.261 3.896 - 3.923: 99.6585% ( 13) 00:13:25.261 3.923 - 3.951: 99.7091% ( 8) 00:13:25.261 5.231 - 5.259: 99.7154% ( 1) 00:13:25.261 5.454 - 5.482: 99.7217% ( 1) 00:13:25.261 5.482 - 5.510: 99.7281% ( 1) 00:13:25.261 5.510 - 5.537: 99.7344% ( 1) 00:13:25.261 5.565 - 5.593: 99.7407% ( 1) 00:13:25.261 5.649 - 5.677: 99.7470% ( 1) 00:13:25.261 5.760 - 5.788: 99.7534% ( 1) 00:13:25.261 5.843 - 5.871: 99.7660% ( 2) 00:13:25.261 5.899 - 5.927: 99.7786% ( 2) 00:13:25.261 5.927 - 5.955: 99.7850% ( 1) 00:13:25.261 6.066 - 6.094: 99.7913% ( 1) 00:13:25.261 6.122 - 6.150: 99.7976% ( 1) 00:13:25.261 6.150 - 6.177: 99.8039% ( 1) 00:13:25.261 6.261 - 6.289: 99.8103% ( 1) 00:13:25.261 6.289 - 6.317: 99.8166% ( 1) 00:13:25.261 6.428 - 6.456: 99.8229% ( 1) 00:13:25.261 6.456 - 6.483: 99.8292% ( 1) 00:13:25.261 6.511 - 6.539: 99.8356% ( 1) 00:13:25.261 6.762 - 6.790: 99.8419% ( 1) 00:13:25.261 6.817 - 6.845: 99.8482% ( 1) 00:13:25.261 6.957 - 6.984: 99.8545% ( 1) 00:13:25.261 7.123 - 
7.179: 99.8609% ( 1) 00:13:25.261 7.235 - 7.290: 99.8672% ( 1) 00:13:25.261 7.346 - 7.402: 99.8735% ( 1) 00:13:25.261 7.457 - 7.513: 99.8798% ( 1) 00:13:25.261 7.624 - 7.680: 99.8862% ( 1) 00:13:25.261 7.736 - 7.791: 99.8988% ( 2) 00:13:25.261 7.847 - 7.903: 99.9115% ( 2) 00:13:25.261 7.958 - 8.014: 99.9178% ( 1) 00:13:25.261 9.071 - 9.127: 99.9241% ( 1) 00:13:25.261 9.962 - 10.017: 99.9304% ( 1) 00:13:25.261 10.351 - 10.407: 99.9368% ( 1) 00:13:25.261 3989.148 - 4017.642: 100.0000% ( 10) 00:13:25.261 00:13:25.261 Complete histogram 00:13:25.261 ================== 00:13:25.261 Range in us Cumulative Count 00:13:25.261 1.809 - 1.823: 0.0632% ( 10) 00:13:25.261 1.823 - 1.837: 0.7146% ( 103) 00:13:25.261 1.837 - 1.850: 2.3716% ( 262) 00:13:25.261 1.850 - 1.864: 3.9337% ( 247) 00:13:25.261 1.864 - 1.878: 35.5869% ( 5005) 00:13:25.261 1.878 - 1.892: 85.1758% ( 7841) 00:13:25.261 1.892 - 1.906: 93.6314% ( 1337) 00:13:25.261 1.906 - 1.920: 96.5469% ( 461) 00:13:25.261 1.920 - [2024-12-09 05:08:01.558056] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:25.261 1.934: 97.2300% ( 108) 00:13:25.261 1.934 - 1.948: 98.0395% ( 128) 00:13:25.261 1.948 - 1.962: 98.8743% ( 132) 00:13:25.261 1.962 - 1.976: 99.3043% ( 68) 00:13:25.261 1.976 - 1.990: 99.3549% ( 8) 00:13:25.261 1.990 - 2.003: 99.3676% ( 2) 00:13:25.261 2.003 - 2.017: 99.3739% ( 1) 00:13:25.261 2.031 - 2.045: 99.3865% ( 2) 00:13:25.261 2.045 - 2.059: 99.3929% ( 1) 00:13:25.261 2.101 - 2.115: 99.3992% ( 1) 00:13:25.261 3.868 - 3.896: 99.4055% ( 1) 00:13:25.261 4.035 - 4.063: 99.4118% ( 1) 00:13:25.261 4.063 - 4.090: 99.4182% ( 1) 00:13:25.261 4.202 - 4.230: 99.4245% ( 1) 00:13:25.261 4.313 - 4.341: 99.4308% ( 1) 00:13:25.261 4.536 - 4.563: 99.4371% ( 1) 00:13:25.261 4.591 - 4.619: 99.4435% ( 1) 00:13:25.261 5.037 - 5.064: 99.4561% ( 2) 00:13:25.261 5.704 - 5.732: 99.4624% ( 1) 00:13:25.261 6.066 - 6.094: 99.4688% ( 1) 00:13:25.261 6.344 - 6.372: 99.4751% ( 1) 00:13:25.261 6.400 - 6.428: 99.4814% ( 1) 00:13:25.261 6.456 - 6.483: 99.4877% ( 1) 00:13:25.261 6.567 - 6.595: 99.4941% ( 1) 00:13:25.261 6.762 - 6.790: 99.5004% ( 1) 00:13:25.261 10.463 - 10.518: 99.5067% ( 1) 00:13:25.261 3989.148 - 4017.642: 100.0000% ( 78) 00:13:25.261 00:13:25.261 05:08:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:25.262 05:08:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:25.262 05:08:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:25.262 05:08:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:25.262 05:08:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:25.262 [ 00:13:25.262 { 00:13:25.262 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:25.262 "subtype": "Discovery", 00:13:25.262 "listen_addresses": [], 00:13:25.262 "allow_any_host": true, 00:13:25.262 "hosts": [] 00:13:25.262 }, 00:13:25.262 { 00:13:25.262 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:25.262 "subtype": "NVMe", 00:13:25.262 "listen_addresses": [ 00:13:25.262 { 00:13:25.262 "trtype": "VFIOUSER", 00:13:25.262 "adrfam": "IPv4", 00:13:25.262 "traddr": 
"/var/run/vfio-user/domain/vfio-user1/1", 00:13:25.262 "trsvcid": "0" 00:13:25.262 } 00:13:25.262 ], 00:13:25.262 "allow_any_host": true, 00:13:25.262 "hosts": [], 00:13:25.262 "serial_number": "SPDK1", 00:13:25.262 "model_number": "SPDK bdev Controller", 00:13:25.262 "max_namespaces": 32, 00:13:25.262 "min_cntlid": 1, 00:13:25.262 "max_cntlid": 65519, 00:13:25.262 "namespaces": [ 00:13:25.262 { 00:13:25.262 "nsid": 1, 00:13:25.262 "bdev_name": "Malloc1", 00:13:25.262 "name": "Malloc1", 00:13:25.262 "nguid": "8E32825E592841CC946DEC0AAB4904B8", 00:13:25.262 "uuid": "8e32825e-5928-41cc-946d-ec0aab4904b8" 00:13:25.262 }, 00:13:25.262 { 00:13:25.262 "nsid": 2, 00:13:25.262 "bdev_name": "Malloc3", 00:13:25.262 "name": "Malloc3", 00:13:25.262 "nguid": "279C188B30DF41A29497504C0B6E9B11", 00:13:25.262 "uuid": "279c188b-30df-41a2-9497-504c0b6e9b11" 00:13:25.262 } 00:13:25.262 ] 00:13:25.262 }, 00:13:25.262 { 00:13:25.262 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:25.262 "subtype": "NVMe", 00:13:25.262 "listen_addresses": [ 00:13:25.262 { 00:13:25.262 "trtype": "VFIOUSER", 00:13:25.262 "adrfam": "IPv4", 00:13:25.262 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:25.262 "trsvcid": "0" 00:13:25.262 } 00:13:25.262 ], 00:13:25.262 "allow_any_host": true, 00:13:25.262 "hosts": [], 00:13:25.262 "serial_number": "SPDK2", 00:13:25.262 "model_number": "SPDK bdev Controller", 00:13:25.262 "max_namespaces": 32, 00:13:25.262 "min_cntlid": 1, 00:13:25.262 "max_cntlid": 65519, 00:13:25.262 "namespaces": [ 00:13:25.262 { 00:13:25.262 "nsid": 1, 00:13:25.262 "bdev_name": "Malloc2", 00:13:25.262 "name": "Malloc2", 00:13:25.262 "nguid": "F9582CB2F5CE430A9E3C6A576EE9404A", 00:13:25.262 "uuid": "f9582cb2-f5ce-430a-9e3c-6a576ee9404a" 00:13:25.262 } 00:13:25.262 ] 00:13:25.262 } 00:13:25.262 ] 00:13:25.262 05:08:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:25.262 05:08:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:25.262 05:08:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3548865 00:13:25.262 05:08:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:25.262 05:08:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:25.262 05:08:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:25.262 05:08:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:25.262 05:08:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:25.262 05:08:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:25.262 05:08:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:25.521 [2024-12-09 05:08:01.962439] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:25.521 Malloc4 00:13:25.521 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:25.780 [2024-12-09 05:08:02.207278] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:25.780 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:25.780 Asynchronous Event Request test 00:13:25.780 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:25.780 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:25.780 Registering asynchronous event callbacks... 00:13:25.780 Starting namespace attribute notice tests for all controllers... 00:13:25.780 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:25.780 aer_cb - Changed Namespace 00:13:25.780 Cleaning up... 00:13:25.780 [ 00:13:25.780 { 00:13:25.780 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:25.780 "subtype": "Discovery", 00:13:25.780 "listen_addresses": [], 00:13:25.780 "allow_any_host": true, 00:13:25.780 "hosts": [] 00:13:25.780 }, 00:13:25.780 { 00:13:25.780 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:25.780 "subtype": "NVMe", 00:13:25.780 "listen_addresses": [ 00:13:25.780 { 00:13:25.780 "trtype": "VFIOUSER", 00:13:25.780 "adrfam": "IPv4", 00:13:25.780 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:25.780 "trsvcid": "0" 00:13:25.780 } 00:13:25.780 ], 00:13:25.780 "allow_any_host": true, 00:13:25.780 "hosts": [], 00:13:25.780 "serial_number": "SPDK1", 00:13:25.780 "model_number": "SPDK bdev Controller", 00:13:25.780 "max_namespaces": 32, 00:13:25.780 "min_cntlid": 1, 00:13:25.780 "max_cntlid": 65519, 00:13:25.780 "namespaces": [ 00:13:25.780 { 00:13:25.780 "nsid": 1, 00:13:25.780 "bdev_name": "Malloc1", 00:13:25.780 "name": "Malloc1", 00:13:25.780 "nguid": "8E32825E592841CC946DEC0AAB4904B8", 00:13:25.780 "uuid": "8e32825e-5928-41cc-946d-ec0aab4904b8" 00:13:25.780 }, 00:13:25.780 { 00:13:25.780 "nsid": 2, 00:13:25.780 "bdev_name": "Malloc3", 00:13:25.780 "name": "Malloc3", 00:13:25.780 "nguid": "279C188B30DF41A29497504C0B6E9B11", 00:13:25.780 "uuid": "279c188b-30df-41a2-9497-504c0b6e9b11" 00:13:25.780 } 00:13:25.780 ] 00:13:25.780 }, 00:13:25.780 { 00:13:25.780 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:25.780 "subtype": "NVMe", 00:13:25.780 "listen_addresses": [ 00:13:25.780 { 00:13:25.780 "trtype": "VFIOUSER", 00:13:25.780 "adrfam": "IPv4", 00:13:25.780 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:25.780 "trsvcid": "0" 00:13:25.780 } 00:13:25.780 ], 00:13:25.780 "allow_any_host": true, 00:13:25.780 "hosts": [], 00:13:25.780 "serial_number": "SPDK2", 00:13:25.780 "model_number": "SPDK bdev 
Controller", 00:13:25.780 "max_namespaces": 32, 00:13:25.780 "min_cntlid": 1, 00:13:25.780 "max_cntlid": 65519, 00:13:25.780 "namespaces": [ 00:13:25.780 { 00:13:25.780 "nsid": 1, 00:13:25.780 "bdev_name": "Malloc2", 00:13:25.780 "name": "Malloc2", 00:13:25.780 "nguid": "F9582CB2F5CE430A9E3C6A576EE9404A", 00:13:25.780 "uuid": "f9582cb2-f5ce-430a-9e3c-6a576ee9404a" 00:13:25.780 }, 00:13:25.780 { 00:13:25.780 "nsid": 2, 00:13:25.780 "bdev_name": "Malloc4", 00:13:25.780 "name": "Malloc4", 00:13:25.780 "nguid": "D68F90847C51456083C0B045FC59CB9E", 00:13:25.780 "uuid": "d68f9084-7c51-4560-83c0-b045fc59cb9e" 00:13:25.780 } 00:13:25.780 ] 00:13:25.780 } 00:13:25.780 ] 00:13:26.040 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3548865 00:13:26.040 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:26.040 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3541017 00:13:26.040 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3541017 ']' 00:13:26.040 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3541017 00:13:26.040 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:13:26.040 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:26.040 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3541017 00:13:26.040 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:26.040 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:26.040 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3541017' 00:13:26.040 killing process with pid 3541017 00:13:26.040 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3541017 00:13:26.040 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3541017 00:13:26.299 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:26.299 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:26.300 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:26.300 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:26.300 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:26.300 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3549097 00:13:26.300 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3549097' 00:13:26.300 Process pid: 3549097 00:13:26.300 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:26.300 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 
-- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:26.300 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3549097 00:13:26.300 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3549097 ']' 00:13:26.300 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.300 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:26.300 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.300 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:26.300 05:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:26.300 [2024-12-09 05:08:02.817867] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:26.300 [2024-12-09 05:08:02.818750] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:13:26.300 [2024-12-09 05:08:02.818790] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.300 [2024-12-09 05:08:02.882172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:26.300 [2024-12-09 05:08:02.919500] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:26.300 [2024-12-09 05:08:02.919541] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:26.300 [2024-12-09 05:08:02.919549] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:26.300 [2024-12-09 05:08:02.919555] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:26.300 [2024-12-09 05:08:02.919559] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:26.300 [2024-12-09 05:08:02.921145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.300 [2024-12-09 05:08:02.921244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.300 [2024-12-09 05:08:02.921309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:26.300 [2024-12-09 05:08:02.921310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.559 [2024-12-09 05:08:02.990006] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:26.559 [2024-12-09 05:08:02.990084] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:26.559 [2024-12-09 05:08:02.990230] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:26.559 [2024-12-09 05:08:02.990496] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
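The target restarted above runs with --interrupt-mode on cores 0-3, and the per-device vfio-user setup in the trace that follows reduces to a short RPC sequence. A condensed sketch for the first device (rpc.py here abbreviates the full scripts/rpc.py path used in the trace; the sequence is condensed from the commands that follow):
rpc.py nvmf_create_transport -t VFIOUSER -M -I
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0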
00:13:26.559 [2024-12-09 05:08:02.990676] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:13:26.559 05:08:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:26.559 05:08:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:26.559 05:08:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:27.496 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:27.754 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:27.754 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:27.754 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:27.754 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:27.754 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:28.013 Malloc1 00:13:28.013 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:28.013 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:28.271 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:28.537 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:28.537 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:28.537 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:28.798 Malloc2 00:13:28.798 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:29.057 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:29.057 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:29.315 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:29.315 05:08:05 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3549097 00:13:29.315 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3549097 ']' 00:13:29.315 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3549097 00:13:29.315 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:13:29.315 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:29.315 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3549097 00:13:29.315 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:29.315 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:29.315 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3549097' 00:13:29.315 killing process with pid 3549097 00:13:29.315 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3549097 00:13:29.315 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3549097 00:13:29.590 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:29.590 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:29.590 00:13:29.590 real 0m52.091s 00:13:29.590 user 3m21.637s 00:13:29.590 sys 0m3.159s 00:13:29.590 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.590 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:29.590 ************************************ 00:13:29.590 END TEST nvmf_vfio_user 00:13:29.590 ************************************ 00:13:29.590 05:08:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:29.590 05:08:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:29.590 05:08:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.590 05:08:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:29.590 ************************************ 00:13:29.590 START TEST nvmf_vfio_user_nvme_compliance 00:13:29.590 ************************************ 00:13:29.590 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:29.906 * Looking for test storage... 
00:13:29.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:29.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.906 --rc genhtml_branch_coverage=1 00:13:29.906 --rc genhtml_function_coverage=1 00:13:29.906 --rc genhtml_legend=1 00:13:29.906 --rc geninfo_all_blocks=1 00:13:29.906 --rc geninfo_unexecuted_blocks=1 00:13:29.906 00:13:29.906 ' 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:29.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.906 --rc genhtml_branch_coverage=1 00:13:29.906 --rc genhtml_function_coverage=1 00:13:29.906 --rc genhtml_legend=1 00:13:29.906 --rc geninfo_all_blocks=1 00:13:29.906 --rc geninfo_unexecuted_blocks=1 00:13:29.906 00:13:29.906 ' 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:29.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.906 --rc genhtml_branch_coverage=1 00:13:29.906 --rc genhtml_function_coverage=1 00:13:29.906 --rc genhtml_legend=1 00:13:29.906 --rc geninfo_all_blocks=1 00:13:29.906 --rc geninfo_unexecuted_blocks=1 00:13:29.906 00:13:29.906 ' 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:29.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.906 --rc genhtml_branch_coverage=1 00:13:29.906 --rc genhtml_function_coverage=1 00:13:29.906 --rc genhtml_legend=1 00:13:29.906 --rc geninfo_all_blocks=1 00:13:29.906 --rc 
geninfo_unexecuted_blocks=1 00:13:29.906 00:13:29.906 ' 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:29.906 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:29.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3549859 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3549859' 00:13:29.907 Process pid: 3549859 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3549859 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 3549859 ']' 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.907 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:29.907 [2024-12-09 05:08:06.437305] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:13:29.907 [2024-12-09 05:08:06.437354] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.907 [2024-12-09 05:08:06.499860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:29.907 [2024-12-09 05:08:06.540107] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.907 [2024-12-09 05:08:06.540144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:29.907 [2024-12-09 05:08:06.540151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.907 [2024-12-09 05:08:06.540157] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.907 [2024-12-09 05:08:06.540162] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:29.907 [2024-12-09 05:08:06.541468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.907 [2024-12-09 05:08:06.541565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.907 [2024-12-09 05:08:06.541567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.165 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:30.165 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:13:30.165 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:31.101 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:31.101 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:31.101 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:31.101 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.101 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:31.101 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.101 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:31.101 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:31.101 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.101 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:31.101 malloc0 00:13:31.101 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.101 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:31.101 05:08:07 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.101 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:31.101 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.101 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:31.101 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.101 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:31.101 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.101 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:31.101 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.101 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:31.101 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.101 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:31.360 00:13:31.360 00:13:31.360 CUnit - A unit testing framework for C - Version 2.1-3 00:13:31.360 http://cunit.sourceforge.net/ 00:13:31.360 00:13:31.360 00:13:31.360 Suite: nvme_compliance 00:13:31.360 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-09 05:08:07.901536] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:31.360 [2024-12-09 05:08:07.902891] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:31.360 [2024-12-09 05:08:07.902906] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:31.360 [2024-12-09 05:08:07.902913] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:31.360 [2024-12-09 05:08:07.904560] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:31.360 passed 00:13:31.360 Test: admin_identify_ctrlr_verify_fused ...[2024-12-09 05:08:07.985132] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:31.360 [2024-12-09 05:08:07.988154] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:31.618 passed 00:13:31.618 Test: admin_identify_ns ...[2024-12-09 05:08:08.073104] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:31.618 [2024-12-09 05:08:08.132007] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:31.618 [2024-12-09 05:08:08.140010] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:31.618 [2024-12-09 05:08:08.161112] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:13:31.618 passed 00:13:31.618 Test: admin_get_features_mandatory_features ...[2024-12-09 05:08:08.241693] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:31.618 [2024-12-09 05:08:08.244711] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:31.890 passed 00:13:31.890 Test: admin_get_features_optional_features ...[2024-12-09 05:08:08.325266] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:31.890 [2024-12-09 05:08:08.328284] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:31.890 passed 00:13:31.891 Test: admin_set_features_number_of_queues ...[2024-12-09 05:08:08.408129] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:31.891 [2024-12-09 05:08:08.513086] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:32.149 passed 00:13:32.149 Test: admin_get_log_page_mandatory_logs ...[2024-12-09 05:08:08.593368] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:32.149 [2024-12-09 05:08:08.596393] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:32.149 passed 00:13:32.149 Test: admin_get_log_page_with_lpo ...[2024-12-09 05:08:08.677298] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:32.149 [2024-12-09 05:08:08.745010] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:32.149 [2024-12-09 05:08:08.757171] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:32.149 passed 00:13:32.408 Test: fabric_property_get ...[2024-12-09 05:08:08.836714] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:32.408 [2024-12-09 05:08:08.837963] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:32.408 [2024-12-09 05:08:08.839740] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:32.408 passed 00:13:32.408 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-09 05:08:08.920279] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:32.408 [2024-12-09 05:08:08.921520] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:32.408 [2024-12-09 05:08:08.924303] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:32.408 passed 00:13:32.408 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-09 05:08:09.006241] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:32.667 [2024-12-09 05:08:09.091006] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:32.667 [2024-12-09 05:08:09.107013] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:32.667 [2024-12-09 05:08:09.112084] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:32.667 passed 00:13:32.667 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-09 05:08:09.189646] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:32.667 [2024-12-09 05:08:09.190886] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:32.667 [2024-12-09 05:08:09.192666] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:13:32.667 passed 00:13:32.667 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-09 05:08:09.275273] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:32.927 [2024-12-09 05:08:09.352003] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:32.927 [2024-12-09 05:08:09.376004] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:32.927 [2024-12-09 05:08:09.381083] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:32.927 passed 00:13:32.927 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-09 05:08:09.458651] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:32.927 [2024-12-09 05:08:09.459890] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:32.927 [2024-12-09 05:08:09.459914] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:32.927 [2024-12-09 05:08:09.463686] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:32.927 passed 00:13:32.927 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-09 05:08:09.541541] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:33.187 [2024-12-09 05:08:09.637006] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:33.187 [2024-12-09 05:08:09.645005] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:33.187 [2024-12-09 05:08:09.653011] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:33.187 [2024-12-09 05:08:09.661002] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:33.187 [2024-12-09 05:08:09.690098] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:33.187 passed 00:13:33.187 Test: admin_create_io_sq_verify_pc ...[2024-12-09 05:08:09.767618] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:33.187 [2024-12-09 05:08:09.784014] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:33.187 [2024-12-09 05:08:09.801817] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:33.447 passed 00:13:33.447 Test: admin_create_io_qp_max_qps ...[2024-12-09 05:08:09.882372] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.384 [2024-12-09 05:08:10.976008] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:13:34.950 [2024-12-09 05:08:11.352964] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:34.950 passed 00:13:34.950 Test: admin_create_io_sq_shared_cq ...[2024-12-09 05:08:11.433121] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.950 [2024-12-09 05:08:11.565009] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:35.209 [2024-12-09 05:08:11.602069] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:35.209 passed 00:13:35.209 00:13:35.209 Run Summary: Type Total Ran Passed Failed Inactive 00:13:35.209 suites 1 1 n/a 0 0 00:13:35.209 tests 18 18 18 0 0 00:13:35.209 asserts 
360 360 360 0 n/a 00:13:35.209 00:13:35.209 Elapsed time = 1.528 seconds 00:13:35.209 05:08:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3549859 00:13:35.209 05:08:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 3549859 ']' 00:13:35.209 05:08:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 3549859 00:13:35.209 05:08:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:13:35.209 05:08:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:35.209 05:08:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3549859 00:13:35.209 05:08:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:35.209 05:08:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:35.209 05:08:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3549859' 00:13:35.209 killing process with pid 3549859 00:13:35.209 05:08:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 3549859 00:13:35.210 05:08:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 3549859 00:13:35.469 05:08:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:35.469 05:08:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:35.469 00:13:35.469 real 0m5.730s 00:13:35.469 user 0m16.034s 00:13:35.469 sys 0m0.494s 00:13:35.469 05:08:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.469 05:08:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:35.469 ************************************ 00:13:35.469 END TEST nvmf_vfio_user_nvme_compliance 00:13:35.469 ************************************ 00:13:35.469 05:08:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:35.469 05:08:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:35.469 05:08:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.469 05:08:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:35.469 ************************************ 00:13:35.469 START TEST nvmf_vfio_user_fuzz 00:13:35.469 ************************************ 00:13:35.469 05:08:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:35.469 * Looking for test storage... 
00:13:35.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:35.469 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:35.469 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:13:35.469 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:35.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.729 --rc genhtml_branch_coverage=1 00:13:35.729 --rc genhtml_function_coverage=1 00:13:35.729 --rc genhtml_legend=1 00:13:35.729 --rc geninfo_all_blocks=1 00:13:35.729 --rc geninfo_unexecuted_blocks=1 00:13:35.729 00:13:35.729 ' 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:35.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.729 --rc genhtml_branch_coverage=1 00:13:35.729 --rc genhtml_function_coverage=1 00:13:35.729 --rc genhtml_legend=1 00:13:35.729 --rc geninfo_all_blocks=1 00:13:35.729 --rc geninfo_unexecuted_blocks=1 00:13:35.729 00:13:35.729 ' 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:35.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.729 --rc genhtml_branch_coverage=1 00:13:35.729 --rc genhtml_function_coverage=1 00:13:35.729 --rc genhtml_legend=1 00:13:35.729 --rc geninfo_all_blocks=1 00:13:35.729 --rc geninfo_unexecuted_blocks=1 00:13:35.729 00:13:35.729 ' 00:13:35.729 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:35.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.729 --rc genhtml_branch_coverage=1 00:13:35.729 --rc genhtml_function_coverage=1 00:13:35.729 --rc genhtml_legend=1 00:13:35.729 --rc geninfo_all_blocks=1 00:13:35.729 --rc geninfo_unexecuted_blocks=1 00:13:35.729 00:13:35.729 ' 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:35.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3550846 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3550846' 00:13:35.730 Process pid: 3550846 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3550846 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3550846 ']' 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:35.730 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:35.989 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:35.989 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:13:35.989 05:08:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:36.924 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:36.924 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.924 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:36.924 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.924 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:36.925 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:36.925 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.925 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:36.925 malloc0 00:13:36.925 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.925 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:36.925 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.925 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:36.925 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.925 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:36.925 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.925 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:36.925 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.925 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:36.925 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.925 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:36.925 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.925 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
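For readers skimming the xtrace output above: the vfio-user fuzz target that rpc_cmd just provisioned reduces to a short RPC sequence. The sketch below restates it as standalone commands (a sketch only; it assumes SPDK's scripts/rpc.py is on the path and an nvmf_tgt is already listening on its default RPC socket — the NQN, malloc sizes and socket directory are the ones shown in this run):

    # enable the vfio-user transport in the running nvmf_tgt
    rpc.py nvmf_create_transport -t VFIOUSER
    # directory that will hold the vfio-user socket/region files
    mkdir -p /var/run/vfio-user
    # 64 MB ramdisk with 512-byte blocks to back the namespace
    rpc.py bdev_malloc_create 64 512 -b malloc0
    # subsystem that allows any host (-a), serial number "spdk"
    rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    # expose it over vfio-user under /var/run/vfio-user
    rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

nvme_fuzz is then pointed at that listener with -t 30 -S 123456 (a 30-second run with a fixed seed), which is what produces the opcode and command counts dumped below.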
00:13:36.925 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:09.067 Fuzzing completed. Shutting down the fuzz application 00:14:09.067 00:14:09.067 Dumping successful admin opcodes: 00:14:09.067 9, 10, 00:14:09.067 Dumping successful io opcodes: 00:14:09.067 0, 00:14:09.067 NS: 0x20000081ef00 I/O qp, Total commands completed: 990925, total successful commands: 3880, random_seed: 574808960 00:14:09.067 NS: 0x20000081ef00 admin qp, Total commands completed: 245456, total successful commands: 57, random_seed: 3787005568 00:14:09.067 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:09.067 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.067 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:09.067 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.067 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3550846 00:14:09.067 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3550846 ']' 00:14:09.067 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 3550846 00:14:09.067 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:14:09.067 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.067 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3550846 00:14:09.067 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:09.067 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:09.067 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3550846' 00:14:09.067 killing process with pid 3550846 00:14:09.067 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 3550846 00:14:09.067 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 3550846 00:14:09.067 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:09.067 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:09.067 00:14:09.067 real 0m32.254s 00:14:09.067 user 0m29.845s 00:14:09.067 sys 0m31.058s 00:14:09.067 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:09.067 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:09.067 ************************************ 
00:14:09.067 END TEST nvmf_vfio_user_fuzz 00:14:09.067 ************************************ 00:14:09.067 05:08:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:09.067 05:08:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:09.067 05:08:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:09.067 05:08:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:09.067 ************************************ 00:14:09.067 START TEST nvmf_auth_target 00:14:09.067 ************************************ 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:09.068 * Looking for test storage... 00:14:09.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:09.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.068 --rc genhtml_branch_coverage=1 00:14:09.068 --rc genhtml_function_coverage=1 00:14:09.068 --rc genhtml_legend=1 00:14:09.068 --rc geninfo_all_blocks=1 00:14:09.068 --rc geninfo_unexecuted_blocks=1 00:14:09.068 00:14:09.068 ' 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:09.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.068 --rc genhtml_branch_coverage=1 00:14:09.068 --rc genhtml_function_coverage=1 00:14:09.068 --rc genhtml_legend=1 00:14:09.068 --rc geninfo_all_blocks=1 00:14:09.068 --rc geninfo_unexecuted_blocks=1 00:14:09.068 00:14:09.068 ' 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:09.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.068 --rc genhtml_branch_coverage=1 00:14:09.068 --rc genhtml_function_coverage=1 00:14:09.068 --rc genhtml_legend=1 00:14:09.068 --rc geninfo_all_blocks=1 00:14:09.068 --rc geninfo_unexecuted_blocks=1 00:14:09.068 00:14:09.068 ' 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:09.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.068 --rc genhtml_branch_coverage=1 00:14:09.068 --rc genhtml_function_coverage=1 00:14:09.068 --rc genhtml_legend=1 00:14:09.068 --rc geninfo_all_blocks=1 00:14:09.068 --rc geninfo_unexecuted_blocks=1 00:14:09.068 00:14:09.068 ' 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.068 05:08:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:09.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:09.068 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:09.069 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:09.069 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:09.069 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:09.069 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:09.069 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.069 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:09.069 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:09.069 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:09.069 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.069 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:09.069 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.069 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:09.069 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:09.069 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:09.069 05:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.295 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:13.295 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:13.295 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:13.295 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:13.295 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:13.295 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:13.295 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:13.295 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:14:13.296 
05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:13.296 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:13.296 05:08:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:13.296 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:13.296 Found net devices under 0000:86:00.0: cvl_0_0 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:13.296 Found net devices under 0000:86:00.1: cvl_0_1 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:13.296 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:13.556 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:13.556 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:13.556 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:13.556 05:08:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:13.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:13.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:14:13.556 00:14:13.556 --- 10.0.0.2 ping statistics --- 00:14:13.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.556 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:14:13.556 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:13.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:13.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:14:13.556 00:14:13.556 --- 10.0.0.1 ping statistics --- 00:14:13.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.556 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:14:13.556 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:13.556 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:14:13.556 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:13.556 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:13.556 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:13.556 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:13.556 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:13.556 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:13.556 05:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:13.556 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:13.556 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:13.556 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:13.556 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.556 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3559149 00:14:13.556 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3559149 00:14:13.556 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3559149 ']' 00:14:13.556 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:13.556 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.556 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:13.556 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
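Stripped of the xtrace noise, the network bring-up traced above is the usual phy-mode topology from nvmf/common.sh: the first E810 port is moved into a private namespace and acts as the target, the second stays in the root namespace as the initiator, and one ping in each direction confirms connectivity before nvmf_tgt is launched inside that namespace. A bare-bones restatement follows (a sketch; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are simply what this run detected and assigned, and will differ on other rigs):

    ip netns add cvl_0_0_ns_spdk                         # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns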
00:14:13.556 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:13.556 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3559169 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2b90b1eeb73ebc0da0d76d69f70045e49f4b147aaa387c79 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.peW 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2b90b1eeb73ebc0da0d76d69f70045e49f4b147aaa387c79 0 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2b90b1eeb73ebc0da0d76d69f70045e49f4b147aaa387c79 0 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2b90b1eeb73ebc0da0d76d69f70045e49f4b147aaa387c79 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
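Each gen_dhchap_key call in this block follows the same observable pattern: read len/2 random bytes, keep the hex string as the secret, and hand it to an inline helper for DHHC-1 wrapping. A minimal sketch for the first key (null digest, 48 hex characters); the body of the "python -" helper is not shown in the trace, so its role here is an assumption:

  digest=null len=48
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # 24 random bytes -> 48 hex chars, 2b90b1ee... in this run
  file=$(mktemp -t spdk.key-null.XXX)              # /tmp/spdk.key-null.peW here
  # format_dhchap_key "$key" 0 forwards the DHHC-1 prefix, the hex key and the digest index
  # (null=0, sha256=1, sha384=2, sha512=3) to the inline python helper, which is assumed
  # to write the finished DHHC-1 secret into $file
  chmod 0600 "$file"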
00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.peW 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.peW 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.peW 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=10057a2586569bf95c39e14f2386535428b459c2a0db014f8e8630060872018c 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.H9N 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 10057a2586569bf95c39e14f2386535428b459c2a0db014f8e8630060872018c 3 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 10057a2586569bf95c39e14f2386535428b459c2a0db014f8e8630060872018c 3 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=10057a2586569bf95c39e14f2386535428b459c2a0db014f8e8630060872018c 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.H9N 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.H9N 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.H9N 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
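The finished secrets for keys[0]/ckeys[0] resurface further down as nvme-cli arguments, which makes the DHHC-1 layout visible: the DHHC-1 prefix, a two-digit digest index matching the table above (00 = null, 03 = sha512), and what appears to be a base64 payload closed by a colon:

  DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==:   <- /tmp/spdk.key-null.peW
  DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=:   <- /tmp/spdk.key-sha512.H9N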
00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:13.816 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fcf01a248a910b7d5b908035d9879f0d 00:14:14.075 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:14.075 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.CIy 00:14:14.075 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fcf01a248a910b7d5b908035d9879f0d 1 00:14:14.075 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fcf01a248a910b7d5b908035d9879f0d 1 00:14:14.075 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:14.075 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:14.075 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fcf01a248a910b7d5b908035d9879f0d 00:14:14.075 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.CIy 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.CIy 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.CIy 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=84ad52055cb1675253d87e89087deab65e63947bcef7cd22 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.fM0 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 84ad52055cb1675253d87e89087deab65e63947bcef7cd22 2 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 84ad52055cb1675253d87e89087deab65e63947bcef7cd22 2 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:14.076 05:08:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=84ad52055cb1675253d87e89087deab65e63947bcef7cd22 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.fM0 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.fM0 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.fM0 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ce9eb3710e501bc9ea3795562508c3676dd3f5055d2409a4 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.DtF 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ce9eb3710e501bc9ea3795562508c3676dd3f5055d2409a4 2 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ce9eb3710e501bc9ea3795562508c3676dd3f5055d2409a4 2 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ce9eb3710e501bc9ea3795562508c3676dd3f5055d2409a4 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.DtF 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.DtF 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.DtF 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=203496f66605b810867ab242cac0faa8 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.qJ6 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 203496f66605b810867ab242cac0faa8 1 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 203496f66605b810867ab242cac0faa8 1 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=203496f66605b810867ab242cac0faa8 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.qJ6 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.qJ6 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.qJ6 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=80e0ecfa9364c77ca936a1a6559c9bbafc9c360a3987ca7826e413a76042f980 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Udt 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 80e0ecfa9364c77ca936a1a6559c9bbafc9c360a3987ca7826e413a76042f980 3 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 80e0ecfa9364c77ca936a1a6559c9bbafc9c360a3987ca7826e413a76042f980 3 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=80e0ecfa9364c77ca936a1a6559c9bbafc9c360a3987ca7826e413a76042f980 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:14.076 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:14.335 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Udt 00:14:14.335 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Udt 00:14:14.335 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Udt 00:14:14.335 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:14.335 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3559149 00:14:14.335 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3559149 ']' 00:14:14.335 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.335 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.335 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.335 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.335 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.335 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.335 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:14.335 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3559169 /var/tmp/host.sock 00:14:14.335 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3559169 ']' 00:14:14.335 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:14.335 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.335 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:14.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
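With the target (nvmfpid 3559149, default RPC socket inside the namespace) and the host application (hostpid 3559169, started with -r /var/tmp/host.sock) both listening, the loop that follows registers every generated key file on both sides. Roughly, with the scripts/rpc.py paths abbreviated:

  # target side (rpc_cmd wraps rpc.py against the default /var/tmp/spdk.sock)
  rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.peW
  rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.H9N
  # host side (hostrpc points rpc.py at the initiator application's socket)
  rpc.py -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.peW
  rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.H9N
  # ...repeated for key1/ckey1, key2/ckey2 and key3 (ckeys[3] is empty in this run)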
00:14:14.335 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.336 05:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.595 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.595 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:14.595 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:14.595 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.595 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.595 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.595 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:14.595 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.peW 00:14:14.595 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.595 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.595 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.595 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.peW 00:14:14.595 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.peW 00:14:14.854 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.H9N ]] 00:14:14.854 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.H9N 00:14:14.854 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.854 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.854 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.854 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.H9N 00:14:14.854 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.H9N 00:14:15.114 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:15.114 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.CIy 00:14:15.114 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.114 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.114 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.114 05:08:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.CIy 00:14:15.114 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.CIy 00:14:15.373 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.fM0 ]] 00:14:15.373 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fM0 00:14:15.373 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.373 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.373 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.373 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fM0 00:14:15.373 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fM0 00:14:15.373 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:15.373 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.DtF 00:14:15.373 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.373 05:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.373 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.373 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.DtF 00:14:15.373 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.DtF 00:14:15.632 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.qJ6 ]] 00:14:15.632 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qJ6 00:14:15.632 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.632 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.632 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.632 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qJ6 00:14:15.632 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qJ6 00:14:15.892 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:15.892 05:08:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Udt 00:14:15.892 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.892 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.892 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.892 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Udt 00:14:15.892 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Udt 00:14:16.150 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:16.150 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:16.150 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:16.151 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:16.151 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:16.151 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:16.410 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:16.410 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:16.410 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:16.410 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:16.410 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:16.410 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.410 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.410 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.410 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.410 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.410 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.410 05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.410 
05:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.410 00:14:16.669 05:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:16.669 05:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:16.670 05:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.670 05:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.670 05:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.670 05:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.670 05:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.670 05:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.670 05:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:16.670 { 00:14:16.670 "cntlid": 1, 00:14:16.670 "qid": 0, 00:14:16.670 "state": "enabled", 00:14:16.670 "thread": "nvmf_tgt_poll_group_000", 00:14:16.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:16.670 "listen_address": { 00:14:16.670 "trtype": "TCP", 00:14:16.670 "adrfam": "IPv4", 00:14:16.670 "traddr": "10.0.0.2", 00:14:16.670 "trsvcid": "4420" 00:14:16.670 }, 00:14:16.670 "peer_address": { 00:14:16.670 "trtype": "TCP", 00:14:16.670 "adrfam": "IPv4", 00:14:16.670 "traddr": "10.0.0.1", 00:14:16.670 "trsvcid": "58966" 00:14:16.670 }, 00:14:16.670 "auth": { 00:14:16.670 "state": "completed", 00:14:16.670 "digest": "sha256", 00:14:16.670 "dhgroup": "null" 00:14:16.670 } 00:14:16.670 } 00:14:16.670 ]' 00:14:16.670 05:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:16.670 05:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:16.670 05:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:16.928 05:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:16.929 05:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:16.929 05:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.929 05:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.929 05:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.187 05:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:14:17.187 05:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:14:17.754 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.754 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:17.754 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.754 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.754 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.754 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:17.754 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:17.754 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:17.754 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:17.754 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:17.754 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:17.754 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:17.754 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:17.754 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.754 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:17.754 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.754 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.754 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.754 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.012 05:08:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.012 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.012 00:14:18.012 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:18.270 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:18.271 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.271 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.271 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.271 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.271 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.271 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.271 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:18.271 { 00:14:18.271 "cntlid": 3, 00:14:18.271 "qid": 0, 00:14:18.271 "state": "enabled", 00:14:18.271 "thread": "nvmf_tgt_poll_group_000", 00:14:18.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:18.271 "listen_address": { 00:14:18.271 "trtype": "TCP", 00:14:18.271 "adrfam": "IPv4", 00:14:18.271 "traddr": "10.0.0.2", 00:14:18.271 "trsvcid": "4420" 00:14:18.271 }, 00:14:18.271 "peer_address": { 00:14:18.271 "trtype": "TCP", 00:14:18.271 "adrfam": "IPv4", 00:14:18.271 "traddr": "10.0.0.1", 00:14:18.271 "trsvcid": "58998" 00:14:18.271 }, 00:14:18.271 "auth": { 00:14:18.271 "state": "completed", 00:14:18.271 "digest": "sha256", 00:14:18.271 "dhgroup": "null" 00:14:18.271 } 00:14:18.271 } 00:14:18.271 ]' 00:14:18.271 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:18.271 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:18.271 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:18.529 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:18.529 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:18.529 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.529 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.529 05:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.787 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:14:18.787 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:14:19.355 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.355 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:19.355 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.355 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.355 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.355 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:19.355 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:19.355 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:19.355 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:14:19.355 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:19.355 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:19.355 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:19.355 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:19.355 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.355 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.355 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.355 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.355 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.355 05:08:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.355 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.355 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.613 00:14:19.613 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:19.613 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:19.613 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.873 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.873 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.873 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.873 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.873 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.873 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.873 { 00:14:19.873 "cntlid": 5, 00:14:19.873 "qid": 0, 00:14:19.873 "state": "enabled", 00:14:19.873 "thread": "nvmf_tgt_poll_group_000", 00:14:19.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:19.873 "listen_address": { 00:14:19.873 "trtype": "TCP", 00:14:19.873 "adrfam": "IPv4", 00:14:19.873 "traddr": "10.0.0.2", 00:14:19.873 "trsvcid": "4420" 00:14:19.873 }, 00:14:19.873 "peer_address": { 00:14:19.873 "trtype": "TCP", 00:14:19.873 "adrfam": "IPv4", 00:14:19.873 "traddr": "10.0.0.1", 00:14:19.873 "trsvcid": "60310" 00:14:19.873 }, 00:14:19.873 "auth": { 00:14:19.873 "state": "completed", 00:14:19.873 "digest": "sha256", 00:14:19.873 "dhgroup": "null" 00:14:19.873 } 00:14:19.873 } 00:14:19.873 ]' 00:14:19.873 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:19.873 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:19.873 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:20.132 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:20.132 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:20.132 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.132 05:08:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.132 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.132 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:14:20.132 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:14:20.696 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.696 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:20.696 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.696 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.955 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.955 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:20.955 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:20.955 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:20.955 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:14:20.955 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:20.955 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:20.955 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:20.955 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:20.955 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.955 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:14:20.955 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.955 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:20.955 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.955 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:20.955 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:20.955 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:21.214 00:14:21.214 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:21.214 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:21.214 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.473 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.473 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.473 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.473 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.473 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.473 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:21.473 { 00:14:21.473 "cntlid": 7, 00:14:21.473 "qid": 0, 00:14:21.473 "state": "enabled", 00:14:21.473 "thread": "nvmf_tgt_poll_group_000", 00:14:21.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:21.473 "listen_address": { 00:14:21.473 "trtype": "TCP", 00:14:21.473 "adrfam": "IPv4", 00:14:21.473 "traddr": "10.0.0.2", 00:14:21.473 "trsvcid": "4420" 00:14:21.473 }, 00:14:21.473 "peer_address": { 00:14:21.473 "trtype": "TCP", 00:14:21.473 "adrfam": "IPv4", 00:14:21.473 "traddr": "10.0.0.1", 00:14:21.473 "trsvcid": "60328" 00:14:21.473 }, 00:14:21.473 "auth": { 00:14:21.473 "state": "completed", 00:14:21.473 "digest": "sha256", 00:14:21.473 "dhgroup": "null" 00:14:21.473 } 00:14:21.473 } 00:14:21.473 ]' 00:14:21.473 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:21.473 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:21.473 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:21.473 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:21.473 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:21.732 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.732 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.732 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.732 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:14:21.732 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:14:22.300 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.301 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:22.301 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.301 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.301 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.301 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:22.301 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:22.301 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:22.301 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:22.560 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:14:22.560 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:22.560 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:22.560 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:22.560 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:22.560 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.560 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.560 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.560 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.560 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.560 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.560 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.560 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.819 00:14:22.819 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:22.819 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:22.819 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.078 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.078 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.078 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.078 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.078 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.078 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:23.078 { 00:14:23.078 "cntlid": 9, 00:14:23.078 "qid": 0, 00:14:23.078 "state": "enabled", 00:14:23.078 "thread": "nvmf_tgt_poll_group_000", 00:14:23.078 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:23.078 "listen_address": { 00:14:23.078 "trtype": "TCP", 00:14:23.078 "adrfam": "IPv4", 00:14:23.078 "traddr": "10.0.0.2", 00:14:23.078 "trsvcid": "4420" 00:14:23.078 }, 00:14:23.078 "peer_address": { 00:14:23.078 "trtype": "TCP", 00:14:23.078 "adrfam": "IPv4", 00:14:23.078 "traddr": "10.0.0.1", 00:14:23.078 "trsvcid": "60364" 00:14:23.078 }, 00:14:23.078 "auth": { 00:14:23.078 "state": "completed", 00:14:23.078 "digest": "sha256", 00:14:23.078 "dhgroup": "ffdhe2048" 00:14:23.078 } 00:14:23.078 } 00:14:23.078 ]' 00:14:23.078 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:23.078 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:23.078 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:23.078 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:14:23.078 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:23.078 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.078 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.078 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.337 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:14:23.337 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:14:23.906 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.906 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:23.906 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.906 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.906 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.906 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:23.906 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:23.906 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:24.165 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:14:24.165 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:24.165 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:24.165 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:24.165 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:24.165 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.165 05:09:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.165 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.165 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.165 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.165 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.165 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.165 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.424 00:14:24.424 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:24.424 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:24.424 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.683 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.684 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.684 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.684 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.684 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.684 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:24.684 { 00:14:24.684 "cntlid": 11, 00:14:24.684 "qid": 0, 00:14:24.684 "state": "enabled", 00:14:24.684 "thread": "nvmf_tgt_poll_group_000", 00:14:24.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:24.684 "listen_address": { 00:14:24.684 "trtype": "TCP", 00:14:24.684 "adrfam": "IPv4", 00:14:24.684 "traddr": "10.0.0.2", 00:14:24.684 "trsvcid": "4420" 00:14:24.684 }, 00:14:24.684 "peer_address": { 00:14:24.684 "trtype": "TCP", 00:14:24.684 "adrfam": "IPv4", 00:14:24.684 "traddr": "10.0.0.1", 00:14:24.684 "trsvcid": "60390" 00:14:24.684 }, 00:14:24.684 "auth": { 00:14:24.684 "state": "completed", 00:14:24.684 "digest": "sha256", 00:14:24.684 "dhgroup": "ffdhe2048" 00:14:24.684 } 00:14:24.684 } 00:14:24.684 ]' 00:14:24.684 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:24.684 05:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:24.684 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:24.684 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:24.684 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:24.684 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.684 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.684 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.942 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:14:24.942 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:14:25.509 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.509 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:25.509 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.509 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.509 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.509 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:25.509 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:25.509 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:25.768 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:14:25.769 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.769 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:25.769 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:25.769 05:09:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:25.769 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.769 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.769 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.769 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.769 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.769 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.769 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.769 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.028 00:14:26.028 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:26.028 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:26.028 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.287 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.287 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.287 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.287 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.287 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.287 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:26.287 { 00:14:26.287 "cntlid": 13, 00:14:26.287 "qid": 0, 00:14:26.287 "state": "enabled", 00:14:26.287 "thread": "nvmf_tgt_poll_group_000", 00:14:26.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:26.287 "listen_address": { 00:14:26.287 "trtype": "TCP", 00:14:26.287 "adrfam": "IPv4", 00:14:26.287 "traddr": "10.0.0.2", 00:14:26.287 "trsvcid": "4420" 00:14:26.287 }, 00:14:26.287 "peer_address": { 00:14:26.287 "trtype": "TCP", 00:14:26.287 "adrfam": "IPv4", 00:14:26.287 "traddr": "10.0.0.1", 00:14:26.287 "trsvcid": "60412" 00:14:26.287 }, 00:14:26.287 "auth": { 00:14:26.287 "state": "completed", 00:14:26.287 "digest": 
"sha256", 00:14:26.287 "dhgroup": "ffdhe2048" 00:14:26.287 } 00:14:26.287 } 00:14:26.287 ]' 00:14:26.287 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:26.287 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.287 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:26.287 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:26.287 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:26.287 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.287 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.287 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.545 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:14:26.545 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:14:27.112 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.112 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:27.112 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.112 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.112 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.112 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:27.112 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:27.112 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:27.371 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:14:27.371 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:27.371 05:09:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:27.371 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:27.371 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:27.371 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.371 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:14:27.371 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.371 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.371 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.371 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:27.371 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:27.371 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:27.631 00:14:27.631 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:27.631 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:27.631 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.890 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.890 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.890 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.890 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.890 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.890 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:27.890 { 00:14:27.890 "cntlid": 15, 00:14:27.890 "qid": 0, 00:14:27.890 "state": "enabled", 00:14:27.890 "thread": "nvmf_tgt_poll_group_000", 00:14:27.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:27.890 "listen_address": { 00:14:27.890 "trtype": "TCP", 00:14:27.890 "adrfam": "IPv4", 00:14:27.890 "traddr": "10.0.0.2", 00:14:27.890 "trsvcid": "4420" 00:14:27.890 }, 00:14:27.890 "peer_address": { 00:14:27.890 "trtype": "TCP", 00:14:27.890 "adrfam": "IPv4", 00:14:27.890 "traddr": "10.0.0.1", 00:14:27.890 
"trsvcid": "60446" 00:14:27.890 }, 00:14:27.890 "auth": { 00:14:27.890 "state": "completed", 00:14:27.890 "digest": "sha256", 00:14:27.890 "dhgroup": "ffdhe2048" 00:14:27.890 } 00:14:27.890 } 00:14:27.890 ]' 00:14:27.890 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:27.890 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:27.890 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:27.890 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:27.890 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:27.890 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.890 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.890 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.148 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:14:28.148 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:14:28.714 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.714 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:28.714 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.714 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.714 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.714 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:28.714 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:28.714 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:28.714 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:28.972 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:14:28.972 05:09:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:28.972 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:28.972 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:28.972 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:28.972 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.972 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.972 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.972 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.972 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.972 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.972 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.972 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.231 00:14:29.231 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.231 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:29.231 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.490 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.490 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.490 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.490 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.490 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.490 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:29.490 { 00:14:29.490 "cntlid": 17, 00:14:29.490 "qid": 0, 00:14:29.490 "state": "enabled", 00:14:29.490 "thread": "nvmf_tgt_poll_group_000", 00:14:29.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:29.490 "listen_address": { 00:14:29.490 "trtype": "TCP", 00:14:29.490 "adrfam": "IPv4", 
00:14:29.490 "traddr": "10.0.0.2", 00:14:29.490 "trsvcid": "4420" 00:14:29.490 }, 00:14:29.490 "peer_address": { 00:14:29.490 "trtype": "TCP", 00:14:29.490 "adrfam": "IPv4", 00:14:29.490 "traddr": "10.0.0.1", 00:14:29.490 "trsvcid": "60474" 00:14:29.490 }, 00:14:29.490 "auth": { 00:14:29.490 "state": "completed", 00:14:29.490 "digest": "sha256", 00:14:29.490 "dhgroup": "ffdhe3072" 00:14:29.490 } 00:14:29.490 } 00:14:29.490 ]' 00:14:29.490 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:29.490 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:29.490 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:29.490 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:29.490 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:29.490 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.490 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.490 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.749 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:14:29.749 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:14:30.347 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.347 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:30.347 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.347 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.347 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.348 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.348 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:30.348 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:30.606 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:14:30.606 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:30.606 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:30.606 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:30.606 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:30.606 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.606 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.606 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.606 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.606 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.606 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.606 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.606 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.864 00:14:30.864 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:30.864 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.864 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:31.122 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.122 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.122 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.122 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.122 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.122 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:31.122 { 
00:14:31.122 "cntlid": 19, 00:14:31.122 "qid": 0, 00:14:31.122 "state": "enabled", 00:14:31.122 "thread": "nvmf_tgt_poll_group_000", 00:14:31.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:31.122 "listen_address": { 00:14:31.122 "trtype": "TCP", 00:14:31.122 "adrfam": "IPv4", 00:14:31.122 "traddr": "10.0.0.2", 00:14:31.122 "trsvcid": "4420" 00:14:31.122 }, 00:14:31.122 "peer_address": { 00:14:31.122 "trtype": "TCP", 00:14:31.122 "adrfam": "IPv4", 00:14:31.122 "traddr": "10.0.0.1", 00:14:31.122 "trsvcid": "50952" 00:14:31.122 }, 00:14:31.122 "auth": { 00:14:31.122 "state": "completed", 00:14:31.122 "digest": "sha256", 00:14:31.122 "dhgroup": "ffdhe3072" 00:14:31.122 } 00:14:31.122 } 00:14:31.122 ]' 00:14:31.122 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:31.122 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:31.122 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:31.123 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:31.123 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:31.123 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.123 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.123 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.380 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:14:31.380 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:14:31.946 05:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.946 05:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:31.946 05:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.946 05:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.946 05:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.946 05:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:31.946 05:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:31.946 05:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:32.205 05:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:14:32.205 05:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:32.205 05:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:32.205 05:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:32.205 05:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:32.205 05:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.205 05:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.205 05:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.205 05:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.205 05:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.205 05:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.205 05:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.205 05:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.464 00:14:32.464 05:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:32.464 05:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.464 05:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.724 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.724 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.724 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.724 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.724 05:09:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.724 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:32.724 { 00:14:32.724 "cntlid": 21, 00:14:32.724 "qid": 0, 00:14:32.724 "state": "enabled", 00:14:32.724 "thread": "nvmf_tgt_poll_group_000", 00:14:32.724 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:32.724 "listen_address": { 00:14:32.724 "trtype": "TCP", 00:14:32.724 "adrfam": "IPv4", 00:14:32.724 "traddr": "10.0.0.2", 00:14:32.724 "trsvcid": "4420" 00:14:32.724 }, 00:14:32.724 "peer_address": { 00:14:32.724 "trtype": "TCP", 00:14:32.724 "adrfam": "IPv4", 00:14:32.724 "traddr": "10.0.0.1", 00:14:32.724 "trsvcid": "50974" 00:14:32.724 }, 00:14:32.724 "auth": { 00:14:32.724 "state": "completed", 00:14:32.724 "digest": "sha256", 00:14:32.724 "dhgroup": "ffdhe3072" 00:14:32.724 } 00:14:32.724 } 00:14:32.724 ]' 00:14:32.724 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:32.724 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:32.724 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:32.724 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:32.724 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:32.724 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.724 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.724 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.983 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:14:32.983 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:14:33.552 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.552 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:33.552 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.552 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.552 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:33.552 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.552 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:33.552 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:33.812 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:14:33.812 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:33.812 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:33.812 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:33.812 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:33.812 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.812 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:14:33.812 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.812 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.812 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.812 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:33.812 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:33.812 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:34.071 00:14:34.071 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:34.071 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:34.071 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.331 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.331 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.331 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.331 05:09:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.331 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.331 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:34.331 { 00:14:34.331 "cntlid": 23, 00:14:34.331 "qid": 0, 00:14:34.331 "state": "enabled", 00:14:34.331 "thread": "nvmf_tgt_poll_group_000", 00:14:34.331 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:34.331 "listen_address": { 00:14:34.331 "trtype": "TCP", 00:14:34.331 "adrfam": "IPv4", 00:14:34.331 "traddr": "10.0.0.2", 00:14:34.331 "trsvcid": "4420" 00:14:34.331 }, 00:14:34.331 "peer_address": { 00:14:34.331 "trtype": "TCP", 00:14:34.331 "adrfam": "IPv4", 00:14:34.331 "traddr": "10.0.0.1", 00:14:34.331 "trsvcid": "51006" 00:14:34.331 }, 00:14:34.331 "auth": { 00:14:34.331 "state": "completed", 00:14:34.331 "digest": "sha256", 00:14:34.331 "dhgroup": "ffdhe3072" 00:14:34.331 } 00:14:34.331 } 00:14:34.331 ]' 00:14:34.331 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:34.331 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:34.331 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:34.331 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:34.331 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:34.331 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.331 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.331 05:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.591 05:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:14:34.591 05:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:14:35.159 05:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.159 05:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:35.159 05:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.159 05:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.159 05:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:35.159 05:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:35.159 05:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:35.159 05:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:35.159 05:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:35.420 05:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:14:35.420 05:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:35.420 05:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:35.420 05:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:35.420 05:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:35.420 05:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.420 05:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.420 05:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.420 05:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.420 05:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.420 05:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.420 05:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.420 05:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.679 00:14:35.679 05:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:35.679 05:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:35.679 05:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.938 05:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.938 05:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.938 05:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.938 05:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.938 05:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.938 05:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:35.938 { 00:14:35.938 "cntlid": 25, 00:14:35.938 "qid": 0, 00:14:35.938 "state": "enabled", 00:14:35.938 "thread": "nvmf_tgt_poll_group_000", 00:14:35.938 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:35.938 "listen_address": { 00:14:35.938 "trtype": "TCP", 00:14:35.938 "adrfam": "IPv4", 00:14:35.938 "traddr": "10.0.0.2", 00:14:35.938 "trsvcid": "4420" 00:14:35.938 }, 00:14:35.938 "peer_address": { 00:14:35.938 "trtype": "TCP", 00:14:35.938 "adrfam": "IPv4", 00:14:35.938 "traddr": "10.0.0.1", 00:14:35.938 "trsvcid": "51046" 00:14:35.938 }, 00:14:35.938 "auth": { 00:14:35.938 "state": "completed", 00:14:35.938 "digest": "sha256", 00:14:35.938 "dhgroup": "ffdhe4096" 00:14:35.938 } 00:14:35.938 } 00:14:35.938 ]' 00:14:35.938 05:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:35.938 05:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:35.938 05:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:35.938 05:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:35.938 05:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:35.938 05:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.938 05:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.938 05:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.197 05:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:14:36.197 05:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:14:36.765 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.765 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:36.765 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.765 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.765 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.765 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.765 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:36.765 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:37.024 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:14:37.024 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:37.024 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:37.024 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:37.024 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:37.024 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.024 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.024 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.024 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.024 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.024 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.024 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.024 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.283 00:14:37.283 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.283 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.283 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.542 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.542 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.542 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.542 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.542 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.542 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.542 { 00:14:37.542 "cntlid": 27, 00:14:37.542 "qid": 0, 00:14:37.542 "state": "enabled", 00:14:37.542 "thread": "nvmf_tgt_poll_group_000", 00:14:37.542 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:37.542 "listen_address": { 00:14:37.542 "trtype": "TCP", 00:14:37.542 "adrfam": "IPv4", 00:14:37.542 "traddr": "10.0.0.2", 00:14:37.542 "trsvcid": "4420" 00:14:37.542 }, 00:14:37.542 "peer_address": { 00:14:37.542 "trtype": "TCP", 00:14:37.542 "adrfam": "IPv4", 00:14:37.542 "traddr": "10.0.0.1", 00:14:37.542 "trsvcid": "51080" 00:14:37.542 }, 00:14:37.542 "auth": { 00:14:37.542 "state": "completed", 00:14:37.542 "digest": "sha256", 00:14:37.542 "dhgroup": "ffdhe4096" 00:14:37.542 } 00:14:37.542 } 00:14:37.542 ]' 00:14:37.542 05:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.542 05:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.542 05:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:37.542 05:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:37.542 05:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:37.542 05:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.542 05:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.542 05:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.802 05:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:14:37.802 05:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:14:38.371 05:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:14:38.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.371 05:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:38.371 05:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.371 05:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.371 05:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.371 05:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.371 05:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:38.371 05:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:38.628 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:14:38.628 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:38.628 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:38.628 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:38.628 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:38.628 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.628 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.628 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.628 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.628 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.628 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.628 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.628 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.886 00:14:38.886 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
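The records above repeat one pattern from target/auth.sh for every digest/dhgroup/key combination: constrain the host-side bdev_nvme module to the combination under test, register the host NQN on the subsystem with that DH-HMAC-CHAP key pair, attach a controller through the host RPC socket, and confirm via nvmf_subsystem_get_qpairs that the qpair reports the expected digest, dhgroup and a completed auth state. A minimal sketch of one iteration (sha256 / ffdhe4096 / key2), assembled only from commands visible in this log, follows; key2/ckey2 name keyring entries registered earlier in the script (not shown here), and 10.0.0.2:4420 and the NQNs are the values this run uses.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

  # Limit the host to the digest/dhgroup under test.
  $rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

  # Allow the host on the target subsystem with this key pair.
  $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Attach a controller from the host side using the same keys.
  $rpc -s $hostsock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Check that the new qpair negotiated the expected parameters.
  $rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth'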
00:14:38.886 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:38.886 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.145 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.145 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.145 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.145 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.145 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.145 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.145 { 00:14:39.145 "cntlid": 29, 00:14:39.145 "qid": 0, 00:14:39.145 "state": "enabled", 00:14:39.145 "thread": "nvmf_tgt_poll_group_000", 00:14:39.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:39.146 "listen_address": { 00:14:39.146 "trtype": "TCP", 00:14:39.146 "adrfam": "IPv4", 00:14:39.146 "traddr": "10.0.0.2", 00:14:39.146 "trsvcid": "4420" 00:14:39.146 }, 00:14:39.146 "peer_address": { 00:14:39.146 "trtype": "TCP", 00:14:39.146 "adrfam": "IPv4", 00:14:39.146 "traddr": "10.0.0.1", 00:14:39.146 "trsvcid": "51100" 00:14:39.146 }, 00:14:39.146 "auth": { 00:14:39.146 "state": "completed", 00:14:39.146 "digest": "sha256", 00:14:39.146 "dhgroup": "ffdhe4096" 00:14:39.146 } 00:14:39.146 } 00:14:39.146 ]' 00:14:39.146 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:39.146 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.146 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.146 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:39.146 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:39.146 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.146 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.146 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.404 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:14:39.405 05:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: 
--dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:14:39.972 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.972 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:39.972 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.972 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.972 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.972 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:39.972 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:39.972 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:40.230 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:14:40.230 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:40.230 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:40.230 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:40.230 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:40.230 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.230 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:14:40.230 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.230 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.230 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.230 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:40.230 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:40.230 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:40.489 00:14:40.489 05:09:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:40.489 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.489 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:40.747 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.747 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.747 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.747 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.747 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.747 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:40.747 { 00:14:40.747 "cntlid": 31, 00:14:40.747 "qid": 0, 00:14:40.747 "state": "enabled", 00:14:40.747 "thread": "nvmf_tgt_poll_group_000", 00:14:40.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:40.747 "listen_address": { 00:14:40.747 "trtype": "TCP", 00:14:40.747 "adrfam": "IPv4", 00:14:40.747 "traddr": "10.0.0.2", 00:14:40.747 "trsvcid": "4420" 00:14:40.747 }, 00:14:40.747 "peer_address": { 00:14:40.747 "trtype": "TCP", 00:14:40.747 "adrfam": "IPv4", 00:14:40.747 "traddr": "10.0.0.1", 00:14:40.747 "trsvcid": "50958" 00:14:40.747 }, 00:14:40.747 "auth": { 00:14:40.747 "state": "completed", 00:14:40.747 "digest": "sha256", 00:14:40.747 "dhgroup": "ffdhe4096" 00:14:40.747 } 00:14:40.747 } 00:14:40.747 ]' 00:14:40.747 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:40.747 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:40.748 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:40.748 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:40.748 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:40.748 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.748 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.748 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.006 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:14:41.006 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:14:41.572 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.572 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:41.572 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.572 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.572 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.572 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:41.572 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:41.573 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:41.573 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:41.832 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:14:41.832 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.832 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:41.832 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:41.832 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:41.832 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.832 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.832 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.832 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.832 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.832 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.832 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.832 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:42.091 00:14:42.091 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:42.091 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:42.091 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.350 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.350 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.350 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.350 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.350 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.350 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.350 { 00:14:42.350 "cntlid": 33, 00:14:42.350 "qid": 0, 00:14:42.350 "state": "enabled", 00:14:42.350 "thread": "nvmf_tgt_poll_group_000", 00:14:42.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:42.350 "listen_address": { 00:14:42.350 "trtype": "TCP", 00:14:42.350 "adrfam": "IPv4", 00:14:42.350 "traddr": "10.0.0.2", 00:14:42.350 "trsvcid": "4420" 00:14:42.350 }, 00:14:42.350 "peer_address": { 00:14:42.350 "trtype": "TCP", 00:14:42.351 "adrfam": "IPv4", 00:14:42.351 "traddr": "10.0.0.1", 00:14:42.351 "trsvcid": "50970" 00:14:42.351 }, 00:14:42.351 "auth": { 00:14:42.351 "state": "completed", 00:14:42.351 "digest": "sha256", 00:14:42.351 "dhgroup": "ffdhe6144" 00:14:42.351 } 00:14:42.351 } 00:14:42.351 ]' 00:14:42.351 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.351 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:42.351 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.351 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:42.351 05:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.609 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.609 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.609 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.609 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret 
DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:14:42.609 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:14:43.178 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.178 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:43.178 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.437 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.437 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.437 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.437 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:43.437 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:43.437 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:14:43.437 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.437 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:43.437 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:43.437 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:43.437 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.437 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.437 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.437 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.437 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.437 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.437 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.437 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:44.004 00:14:44.004 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:44.004 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.004 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.004 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.004 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.004 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.004 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.004 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.004 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.004 { 00:14:44.004 "cntlid": 35, 00:14:44.004 "qid": 0, 00:14:44.004 "state": "enabled", 00:14:44.004 "thread": "nvmf_tgt_poll_group_000", 00:14:44.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:44.004 "listen_address": { 00:14:44.004 "trtype": "TCP", 00:14:44.004 "adrfam": "IPv4", 00:14:44.004 "traddr": "10.0.0.2", 00:14:44.004 "trsvcid": "4420" 00:14:44.004 }, 00:14:44.004 "peer_address": { 00:14:44.004 "trtype": "TCP", 00:14:44.004 "adrfam": "IPv4", 00:14:44.004 "traddr": "10.0.0.1", 00:14:44.004 "trsvcid": "50986" 00:14:44.004 }, 00:14:44.005 "auth": { 00:14:44.005 "state": "completed", 00:14:44.005 "digest": "sha256", 00:14:44.005 "dhgroup": "ffdhe6144" 00:14:44.005 } 00:14:44.005 } 00:14:44.005 ]' 00:14:44.005 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.263 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:44.263 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.263 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:44.263 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.263 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.263 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.263 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.521 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:14:44.521 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:14:45.090 05:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.090 05:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:45.090 05:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.090 05:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.090 05:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.090 05:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.090 05:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:45.090 05:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:45.349 05:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:14:45.349 05:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.349 05:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:45.349 05:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:45.349 05:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:45.349 05:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.349 05:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.349 05:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.349 05:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.349 05:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.349 05:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.349 05:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.349 05:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.609 00:14:45.609 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:45.609 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.609 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:45.868 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.868 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.868 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.868 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.868 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.868 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:45.868 { 00:14:45.868 "cntlid": 37, 00:14:45.868 "qid": 0, 00:14:45.868 "state": "enabled", 00:14:45.868 "thread": "nvmf_tgt_poll_group_000", 00:14:45.868 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:45.868 "listen_address": { 00:14:45.868 "trtype": "TCP", 00:14:45.868 "adrfam": "IPv4", 00:14:45.868 "traddr": "10.0.0.2", 00:14:45.868 "trsvcid": "4420" 00:14:45.868 }, 00:14:45.868 "peer_address": { 00:14:45.868 "trtype": "TCP", 00:14:45.868 "adrfam": "IPv4", 00:14:45.868 "traddr": "10.0.0.1", 00:14:45.868 "trsvcid": "51024" 00:14:45.868 }, 00:14:45.868 "auth": { 00:14:45.868 "state": "completed", 00:14:45.868 "digest": "sha256", 00:14:45.868 "dhgroup": "ffdhe6144" 00:14:45.868 } 00:14:45.868 } 00:14:45.868 ]' 00:14:45.869 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:45.869 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.869 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:45.869 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:45.869 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:45.869 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.869 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:45.869 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.128 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:14:46.128 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:14:46.696 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.696 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:46.696 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.696 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.696 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.696 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:46.696 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:46.696 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:46.955 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:14:46.955 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:46.955 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:46.955 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:46.955 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:46.955 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.955 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:14:46.955 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.955 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.955 05:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.955 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:46.955 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:46.955 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:47.213 00:14:47.213 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:47.213 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:47.213 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.472 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.472 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.472 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.472 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.472 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.472 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:47.472 { 00:14:47.472 "cntlid": 39, 00:14:47.472 "qid": 0, 00:14:47.472 "state": "enabled", 00:14:47.472 "thread": "nvmf_tgt_poll_group_000", 00:14:47.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:47.472 "listen_address": { 00:14:47.472 "trtype": "TCP", 00:14:47.472 "adrfam": "IPv4", 00:14:47.472 "traddr": "10.0.0.2", 00:14:47.472 "trsvcid": "4420" 00:14:47.472 }, 00:14:47.472 "peer_address": { 00:14:47.472 "trtype": "TCP", 00:14:47.472 "adrfam": "IPv4", 00:14:47.472 "traddr": "10.0.0.1", 00:14:47.472 "trsvcid": "51068" 00:14:47.472 }, 00:14:47.472 "auth": { 00:14:47.472 "state": "completed", 00:14:47.472 "digest": "sha256", 00:14:47.472 "dhgroup": "ffdhe6144" 00:14:47.472 } 00:14:47.472 } 00:14:47.472 ]' 00:14:47.472 05:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:47.472 05:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:47.472 05:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:47.472 05:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:47.472 05:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:47.472 05:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:14:47.472 05:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.472 05:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.730 05:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:14:47.730 05:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:14:48.297 05:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.297 05:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:48.297 05:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.297 05:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.297 05:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.297 05:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:48.297 05:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:48.297 05:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:48.297 05:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:48.565 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:14:48.565 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:48.565 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:48.565 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:48.565 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:48.565 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.565 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.565 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
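Alongside the RPC-based attach, each pass also exercises the nvme-cli initiator: connect with the DHHC-1 secrets passed directly on the command line, disconnect, then drop the host entry from the subsystem before the next combination. A sketch of that leg, using placeholder secrets rather than the DHHC-1 values recorded in this run:

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostid=80aaeb9f-0274-ea11-906e-0017a4403562

  # Connect via the kernel initiator; <host secret>/<ctrl secret> are placeholders.
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
      -q "nqn.2014-08.org.nvmexpress:uuid:${hostid}" --hostid "$hostid" -l 0 \
      --dhchap-secret '<host secret>' --dhchap-ctrl-secret '<ctrl secret>'

  nvme disconnect -n "$subnqn"

  # Remove the host entry so the next key/dhgroup combination starts from a clean subsystem.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_remove_host "$subnqn" "nqn.2014-08.org.nvmexpress:uuid:${hostid}"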
00:14:48.565 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.565 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.565 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.565 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.565 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.297 00:14:49.297 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:49.297 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:49.297 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.297 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.297 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.297 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.297 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.297 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.297 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:49.297 { 00:14:49.297 "cntlid": 41, 00:14:49.297 "qid": 0, 00:14:49.297 "state": "enabled", 00:14:49.297 "thread": "nvmf_tgt_poll_group_000", 00:14:49.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:49.297 "listen_address": { 00:14:49.297 "trtype": "TCP", 00:14:49.297 "adrfam": "IPv4", 00:14:49.297 "traddr": "10.0.0.2", 00:14:49.297 "trsvcid": "4420" 00:14:49.297 }, 00:14:49.297 "peer_address": { 00:14:49.297 "trtype": "TCP", 00:14:49.297 "adrfam": "IPv4", 00:14:49.297 "traddr": "10.0.0.1", 00:14:49.297 "trsvcid": "51104" 00:14:49.297 }, 00:14:49.297 "auth": { 00:14:49.297 "state": "completed", 00:14:49.297 "digest": "sha256", 00:14:49.297 "dhgroup": "ffdhe8192" 00:14:49.297 } 00:14:49.297 } 00:14:49.297 ]' 00:14:49.297 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:49.297 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:49.297 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:49.297 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:49.297 05:09:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:49.555 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.555 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.555 05:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.555 05:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:14:49.555 05:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:14:50.121 05:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.121 05:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:50.121 05:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.121 05:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.121 05:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.121 05:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:50.121 05:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:50.121 05:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:50.379 05:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:14:50.379 05:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.379 05:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:50.379 05:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:50.379 05:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:50.379 05:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.379 05:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.379 05:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.379 05:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.379 05:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.379 05:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.379 05:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.379 05:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.945 00:14:50.945 05:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:50.945 05:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:50.945 05:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.204 05:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.204 05:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.204 05:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.204 05:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.204 05:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.204 05:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:51.204 { 00:14:51.204 "cntlid": 43, 00:14:51.204 "qid": 0, 00:14:51.204 "state": "enabled", 00:14:51.204 "thread": "nvmf_tgt_poll_group_000", 00:14:51.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:51.204 "listen_address": { 00:14:51.204 "trtype": "TCP", 00:14:51.204 "adrfam": "IPv4", 00:14:51.204 "traddr": "10.0.0.2", 00:14:51.204 "trsvcid": "4420" 00:14:51.204 }, 00:14:51.204 "peer_address": { 00:14:51.204 "trtype": "TCP", 00:14:51.204 "adrfam": "IPv4", 00:14:51.204 "traddr": "10.0.0.1", 00:14:51.204 "trsvcid": "52570" 00:14:51.204 }, 00:14:51.204 "auth": { 00:14:51.204 "state": "completed", 00:14:51.204 "digest": "sha256", 00:14:51.204 "dhgroup": "ffdhe8192" 00:14:51.204 } 00:14:51.204 } 00:14:51.204 ]' 00:14:51.204 05:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:51.204 05:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:14:51.204 05:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:51.204 05:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:51.204 05:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:51.204 05:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.204 05:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.204 05:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.463 05:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:14:51.463 05:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:14:52.032 05:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.032 05:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:52.032 05:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.032 05:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.032 05:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.032 05:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:52.032 05:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:52.032 05:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:52.291 05:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:14:52.291 05:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.291 05:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:52.291 05:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:52.291 05:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:52.291 05:09:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.291 05:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.291 05:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.291 05:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.291 05:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.291 05:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.291 05:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.291 05:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.860 00:14:52.860 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.860 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.860 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.860 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.860 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.860 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.860 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.860 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.860 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.860 { 00:14:52.860 "cntlid": 45, 00:14:52.860 "qid": 0, 00:14:52.860 "state": "enabled", 00:14:52.860 "thread": "nvmf_tgt_poll_group_000", 00:14:52.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:52.860 "listen_address": { 00:14:52.860 "trtype": "TCP", 00:14:52.860 "adrfam": "IPv4", 00:14:52.860 "traddr": "10.0.0.2", 00:14:52.860 "trsvcid": "4420" 00:14:52.860 }, 00:14:52.860 "peer_address": { 00:14:52.860 "trtype": "TCP", 00:14:52.860 "adrfam": "IPv4", 00:14:52.860 "traddr": "10.0.0.1", 00:14:52.860 "trsvcid": "52594" 00:14:52.860 }, 00:14:52.860 "auth": { 00:14:52.860 "state": "completed", 00:14:52.860 "digest": "sha256", 00:14:52.860 "dhgroup": "ffdhe8192" 00:14:52.860 } 00:14:52.860 } 00:14:52.860 ]' 00:14:52.860 
05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:53.120 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:53.120 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:53.120 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:53.120 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:53.120 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.120 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.120 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.379 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:14:53.379 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:14:53.973 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.973 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:53.973 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.973 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.973 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.973 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.973 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:53.973 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:53.973 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:14:53.973 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:53.973 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:53.973 05:09:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:53.973 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:53.973 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.973 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:14:53.973 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.973 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.973 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.973 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:53.973 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:53.973 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:54.542 00:14:54.542 05:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.542 05:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.542 05:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.801 05:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.801 05:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.801 05:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.801 05:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.801 05:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.801 05:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.801 { 00:14:54.801 "cntlid": 47, 00:14:54.801 "qid": 0, 00:14:54.801 "state": "enabled", 00:14:54.801 "thread": "nvmf_tgt_poll_group_000", 00:14:54.801 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:54.801 "listen_address": { 00:14:54.801 "trtype": "TCP", 00:14:54.801 "adrfam": "IPv4", 00:14:54.801 "traddr": "10.0.0.2", 00:14:54.801 "trsvcid": "4420" 00:14:54.801 }, 00:14:54.801 "peer_address": { 00:14:54.801 "trtype": "TCP", 00:14:54.801 "adrfam": "IPv4", 00:14:54.801 "traddr": "10.0.0.1", 00:14:54.801 "trsvcid": "52610" 00:14:54.801 }, 00:14:54.801 "auth": { 00:14:54.801 "state": "completed", 00:14:54.801 
"digest": "sha256", 00:14:54.801 "dhgroup": "ffdhe8192" 00:14:54.801 } 00:14:54.801 } 00:14:54.801 ]' 00:14:54.801 05:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.801 05:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:54.801 05:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.801 05:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:54.801 05:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.801 05:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.801 05:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.801 05:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.060 05:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:14:55.060 05:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:14:55.628 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.628 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:55.628 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.628 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.628 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.628 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:55.628 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:55.628 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:55.628 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:55.628 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:55.887 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:14:55.887 05:09:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:55.887 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:55.887 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:55.887 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:55.887 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.887 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.887 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.887 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.887 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.887 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.887 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.887 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.147 00:14:56.147 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:56.147 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:56.147 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.406 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.406 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.406 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.406 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.406 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.406 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:56.406 { 00:14:56.406 "cntlid": 49, 00:14:56.406 "qid": 0, 00:14:56.406 "state": "enabled", 00:14:56.406 "thread": "nvmf_tgt_poll_group_000", 00:14:56.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:56.406 "listen_address": { 00:14:56.406 "trtype": "TCP", 00:14:56.406 "adrfam": "IPv4", 
00:14:56.406 "traddr": "10.0.0.2", 00:14:56.406 "trsvcid": "4420" 00:14:56.406 }, 00:14:56.406 "peer_address": { 00:14:56.406 "trtype": "TCP", 00:14:56.406 "adrfam": "IPv4", 00:14:56.406 "traddr": "10.0.0.1", 00:14:56.406 "trsvcid": "52648" 00:14:56.407 }, 00:14:56.407 "auth": { 00:14:56.407 "state": "completed", 00:14:56.407 "digest": "sha384", 00:14:56.407 "dhgroup": "null" 00:14:56.407 } 00:14:56.407 } 00:14:56.407 ]' 00:14:56.407 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:56.407 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:56.407 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:56.407 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:56.407 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:56.407 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.407 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.407 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.664 05:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:14:56.664 05:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:14:57.229 05:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.229 05:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:57.229 05:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.229 05:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.229 05:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.229 05:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:57.229 05:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:57.229 05:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:57.487 05:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:14:57.487 05:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:57.487 05:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:57.487 05:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:57.487 05:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:57.487 05:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.487 05:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.487 05:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.487 05:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.487 05:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.487 05:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.487 05:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.488 05:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.746 00:14:57.746 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.746 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.746 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.004 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.004 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.004 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.004 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.004 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.004 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.004 { 00:14:58.004 "cntlid": 51, 00:14:58.004 "qid": 0, 00:14:58.004 "state": "enabled", 
00:14:58.004 "thread": "nvmf_tgt_poll_group_000", 00:14:58.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:58.004 "listen_address": { 00:14:58.004 "trtype": "TCP", 00:14:58.004 "adrfam": "IPv4", 00:14:58.004 "traddr": "10.0.0.2", 00:14:58.004 "trsvcid": "4420" 00:14:58.004 }, 00:14:58.004 "peer_address": { 00:14:58.004 "trtype": "TCP", 00:14:58.004 "adrfam": "IPv4", 00:14:58.004 "traddr": "10.0.0.1", 00:14:58.004 "trsvcid": "52682" 00:14:58.004 }, 00:14:58.004 "auth": { 00:14:58.004 "state": "completed", 00:14:58.004 "digest": "sha384", 00:14:58.004 "dhgroup": "null" 00:14:58.004 } 00:14:58.004 } 00:14:58.004 ]' 00:14:58.004 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:58.004 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:58.004 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:58.004 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:58.004 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:58.004 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.004 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.004 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.262 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:14:58.262 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:14:58.829 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.829 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:58.829 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.829 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.829 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.829 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:58.829 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:14:58.829 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:59.088 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:14:59.088 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.088 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:59.088 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:59.088 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:59.088 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.088 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.088 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.088 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.088 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.088 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.088 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.089 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.347 00:14:59.347 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.347 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.347 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.605 05:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.605 05:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.605 05:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.605 05:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.605 05:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.605 05:09:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:59.605 { 00:14:59.605 "cntlid": 53, 00:14:59.605 "qid": 0, 00:14:59.605 "state": "enabled", 00:14:59.605 "thread": "nvmf_tgt_poll_group_000", 00:14:59.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:59.605 "listen_address": { 00:14:59.605 "trtype": "TCP", 00:14:59.605 "adrfam": "IPv4", 00:14:59.605 "traddr": "10.0.0.2", 00:14:59.605 "trsvcid": "4420" 00:14:59.605 }, 00:14:59.605 "peer_address": { 00:14:59.605 "trtype": "TCP", 00:14:59.605 "adrfam": "IPv4", 00:14:59.605 "traddr": "10.0.0.1", 00:14:59.605 "trsvcid": "52718" 00:14:59.605 }, 00:14:59.605 "auth": { 00:14:59.605 "state": "completed", 00:14:59.605 "digest": "sha384", 00:14:59.605 "dhgroup": "null" 00:14:59.605 } 00:14:59.605 } 00:14:59.605 ]' 00:14:59.605 05:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:59.605 05:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:59.605 05:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:59.605 05:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:59.605 05:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:59.605 05:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.605 05:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.605 05:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.863 05:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:14:59.863 05:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:15:00.429 05:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.429 05:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:00.429 05:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.429 05:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.429 05:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.429 05:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:15:00.429 05:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:00.429 05:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:00.688 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:00.688 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:00.688 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:00.688 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:00.688 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:00.688 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.688 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:00.688 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.688 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.688 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.688 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:00.688 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:00.688 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:00.947 00:15:00.947 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:00.947 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:00.947 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.207 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.207 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.207 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.207 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.207 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.207 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:01.207 { 00:15:01.207 "cntlid": 55, 00:15:01.207 "qid": 0, 00:15:01.207 "state": "enabled", 00:15:01.207 "thread": "nvmf_tgt_poll_group_000", 00:15:01.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:01.207 "listen_address": { 00:15:01.207 "trtype": "TCP", 00:15:01.207 "adrfam": "IPv4", 00:15:01.207 "traddr": "10.0.0.2", 00:15:01.207 "trsvcid": "4420" 00:15:01.207 }, 00:15:01.207 "peer_address": { 00:15:01.207 "trtype": "TCP", 00:15:01.207 "adrfam": "IPv4", 00:15:01.207 "traddr": "10.0.0.1", 00:15:01.207 "trsvcid": "34134" 00:15:01.207 }, 00:15:01.207 "auth": { 00:15:01.207 "state": "completed", 00:15:01.207 "digest": "sha384", 00:15:01.207 "dhgroup": "null" 00:15:01.207 } 00:15:01.207 } 00:15:01.207 ]' 00:15:01.207 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:01.207 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:01.207 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:01.207 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:01.207 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:01.207 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.207 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.207 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.466 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:15:01.466 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:15:02.034 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.034 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:02.034 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.034 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.034 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.034 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:02.034 05:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.034 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:02.034 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:02.293 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:02.293 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:02.293 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:02.293 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:02.293 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:02.293 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.293 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.293 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.293 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.293 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.293 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.293 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.293 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.552 00:15:02.553 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.553 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.553 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.553 05:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.553 05:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.553 05:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:02.553 05:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.553 05:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.553 05:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.553 { 00:15:02.553 "cntlid": 57, 00:15:02.553 "qid": 0, 00:15:02.553 "state": "enabled", 00:15:02.553 "thread": "nvmf_tgt_poll_group_000", 00:15:02.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:02.553 "listen_address": { 00:15:02.553 "trtype": "TCP", 00:15:02.553 "adrfam": "IPv4", 00:15:02.553 "traddr": "10.0.0.2", 00:15:02.553 "trsvcid": "4420" 00:15:02.553 }, 00:15:02.553 "peer_address": { 00:15:02.553 "trtype": "TCP", 00:15:02.553 "adrfam": "IPv4", 00:15:02.553 "traddr": "10.0.0.1", 00:15:02.553 "trsvcid": "34162" 00:15:02.553 }, 00:15:02.553 "auth": { 00:15:02.553 "state": "completed", 00:15:02.553 "digest": "sha384", 00:15:02.553 "dhgroup": "ffdhe2048" 00:15:02.553 } 00:15:02.553 } 00:15:02.553 ]' 00:15:02.553 05:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.812 05:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:02.812 05:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.812 05:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:02.812 05:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.812 05:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.812 05:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.812 05:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.071 05:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:15:03.071 05:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:15:03.639 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.640 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:03.640 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.640 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.640 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.640 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.640 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:03.640 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:03.898 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:03.898 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.898 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:03.899 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:03.899 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:03.899 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.899 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.899 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.899 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.899 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.899 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.899 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.899 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.158 00:15:04.158 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.158 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.158 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.158 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.158 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.158 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.158 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.158 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.158 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:04.158 { 00:15:04.158 "cntlid": 59, 00:15:04.158 "qid": 0, 00:15:04.158 "state": "enabled", 00:15:04.158 "thread": "nvmf_tgt_poll_group_000", 00:15:04.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:04.158 "listen_address": { 00:15:04.158 "trtype": "TCP", 00:15:04.158 "adrfam": "IPv4", 00:15:04.158 "traddr": "10.0.0.2", 00:15:04.158 "trsvcid": "4420" 00:15:04.158 }, 00:15:04.158 "peer_address": { 00:15:04.158 "trtype": "TCP", 00:15:04.158 "adrfam": "IPv4", 00:15:04.158 "traddr": "10.0.0.1", 00:15:04.158 "trsvcid": "34186" 00:15:04.158 }, 00:15:04.158 "auth": { 00:15:04.158 "state": "completed", 00:15:04.158 "digest": "sha384", 00:15:04.158 "dhgroup": "ffdhe2048" 00:15:04.158 } 00:15:04.158 } 00:15:04.158 ]' 00:15:04.158 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.417 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:04.417 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.417 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:04.417 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.417 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.417 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.417 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.676 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:15:04.676 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:15:05.244 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.244 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:05.244 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.244 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.244 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.244 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.244 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:05.244 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:05.503 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:05.503 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.503 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:05.503 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:05.503 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:05.503 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.503 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.503 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.503 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.503 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.503 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.503 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.503 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.763 00:15:05.763 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.763 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.763 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.763 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.763 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.763 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.763 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.763 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.763 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.763 { 00:15:05.763 "cntlid": 61, 00:15:05.763 "qid": 0, 00:15:05.763 "state": "enabled", 00:15:05.763 "thread": "nvmf_tgt_poll_group_000", 00:15:05.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:05.763 "listen_address": { 00:15:05.763 "trtype": "TCP", 00:15:05.763 "adrfam": "IPv4", 00:15:05.763 "traddr": "10.0.0.2", 00:15:05.763 "trsvcid": "4420" 00:15:05.763 }, 00:15:05.763 "peer_address": { 00:15:05.763 "trtype": "TCP", 00:15:05.763 "adrfam": "IPv4", 00:15:05.763 "traddr": "10.0.0.1", 00:15:05.763 "trsvcid": "34206" 00:15:05.763 }, 00:15:05.763 "auth": { 00:15:05.763 "state": "completed", 00:15:05.763 "digest": "sha384", 00:15:05.763 "dhgroup": "ffdhe2048" 00:15:05.763 } 00:15:05.763 } 00:15:05.763 ]' 00:15:05.763 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.021 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:06.021 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.021 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:06.021 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.021 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.021 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.021 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.279 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:15:06.279 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:15:06.846 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.846 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:06.846 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.846 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.846 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.846 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.846 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:06.846 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:07.105 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:07.105 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.105 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:07.105 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:07.105 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:07.105 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.105 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:07.105 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.105 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.105 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.105 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:07.105 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:07.105 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:07.363 00:15:07.363 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.363 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.363 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.363 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.363 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.363 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.363 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.363 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.363 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.363 { 00:15:07.363 "cntlid": 63, 00:15:07.363 "qid": 0, 00:15:07.363 "state": "enabled", 00:15:07.363 "thread": "nvmf_tgt_poll_group_000", 00:15:07.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:07.363 "listen_address": { 00:15:07.363 "trtype": "TCP", 00:15:07.363 "adrfam": "IPv4", 00:15:07.363 "traddr": "10.0.0.2", 00:15:07.363 "trsvcid": "4420" 00:15:07.363 }, 00:15:07.363 "peer_address": { 00:15:07.363 "trtype": "TCP", 00:15:07.363 "adrfam": "IPv4", 00:15:07.363 "traddr": "10.0.0.1", 00:15:07.363 "trsvcid": "34246" 00:15:07.363 }, 00:15:07.363 "auth": { 00:15:07.363 "state": "completed", 00:15:07.363 "digest": "sha384", 00:15:07.363 "dhgroup": "ffdhe2048" 00:15:07.363 } 00:15:07.363 } 00:15:07.363 ]' 00:15:07.363 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.622 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:07.622 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:07.622 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:07.622 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:07.622 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.622 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.622 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.881 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:15:07.881 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:15:08.457 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:08.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.457 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:08.457 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.457 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.457 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.457 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:08.457 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.457 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:08.457 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:08.457 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:08.457 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.457 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:08.457 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:08.457 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:08.457 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.457 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.457 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.457 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.457 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.457 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.457 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.457 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.715 
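For orientation, the iteration that the trace above keeps repeating (once per key index and DH group) reduces to the following command sequence, reconstructed from the rpc.py and jq calls echoed in this run. The <hostnqn> placeholder stands for the long nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-... value shown in the log, and the three jq checks are combined into one call here for brevity:

  # host side (rpc.py -s /var/tmp/host.sock): pin the digest/dhgroup under test
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  # target side (default RPC socket): authorize the host NQN with the key pair under test
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side: attach a controller, forcing in-band DH-HMAC-CHAP authentication
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # verify the controller came up and the target-side qpair reports the negotiated parameters
  rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'      # expect nvme0
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'             # sha384 / ffdhe3072 / completed
  # tear down before the next key index
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0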
00:15:08.715 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.715 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.715 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.973 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.973 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.973 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.973 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.973 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.973 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.973 { 00:15:08.973 "cntlid": 65, 00:15:08.973 "qid": 0, 00:15:08.973 "state": "enabled", 00:15:08.973 "thread": "nvmf_tgt_poll_group_000", 00:15:08.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:08.973 "listen_address": { 00:15:08.973 "trtype": "TCP", 00:15:08.973 "adrfam": "IPv4", 00:15:08.973 "traddr": "10.0.0.2", 00:15:08.973 "trsvcid": "4420" 00:15:08.973 }, 00:15:08.973 "peer_address": { 00:15:08.973 "trtype": "TCP", 00:15:08.973 "adrfam": "IPv4", 00:15:08.973 "traddr": "10.0.0.1", 00:15:08.973 "trsvcid": "34272" 00:15:08.973 }, 00:15:08.973 "auth": { 00:15:08.973 "state": "completed", 00:15:08.973 "digest": "sha384", 00:15:08.973 "dhgroup": "ffdhe3072" 00:15:08.973 } 00:15:08.973 } 00:15:08.973 ]' 00:15:08.973 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.973 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:08.973 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.231 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:09.231 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.231 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.231 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.231 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.488 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:15:09.488 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:15:10.056 05:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.056 05:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:10.056 05:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.056 05:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.056 05:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.056 05:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.056 05:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:10.056 05:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:10.056 05:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:10.056 05:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.056 05:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:10.056 05:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:10.056 05:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:10.056 05:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.056 05:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.056 05:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.056 05:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.056 05:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.056 05:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.056 05:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.056 05:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.315 00:15:10.315 05:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.315 05:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.315 05:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.576 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.576 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.576 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.576 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.576 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.576 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.576 { 00:15:10.576 "cntlid": 67, 00:15:10.576 "qid": 0, 00:15:10.576 "state": "enabled", 00:15:10.576 "thread": "nvmf_tgt_poll_group_000", 00:15:10.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:10.576 "listen_address": { 00:15:10.576 "trtype": "TCP", 00:15:10.576 "adrfam": "IPv4", 00:15:10.576 "traddr": "10.0.0.2", 00:15:10.576 "trsvcid": "4420" 00:15:10.576 }, 00:15:10.576 "peer_address": { 00:15:10.576 "trtype": "TCP", 00:15:10.576 "adrfam": "IPv4", 00:15:10.576 "traddr": "10.0.0.1", 00:15:10.576 "trsvcid": "49992" 00:15:10.576 }, 00:15:10.576 "auth": { 00:15:10.576 "state": "completed", 00:15:10.576 "digest": "sha384", 00:15:10.576 "dhgroup": "ffdhe3072" 00:15:10.576 } 00:15:10.576 } 00:15:10.576 ]' 00:15:10.576 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.576 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:10.576 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.835 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:10.835 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.835 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.835 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.835 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.095 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret 
DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:15:11.095 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:15:11.663 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.663 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:11.663 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.663 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.663 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.663 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:11.663 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:11.663 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:11.663 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:11.663 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.663 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:11.663 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:11.663 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:11.663 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.663 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.663 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.663 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.663 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.663 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.663 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.663 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.922 00:15:11.922 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.922 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.922 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.181 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.181 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.181 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.181 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.181 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.181 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.181 { 00:15:12.181 "cntlid": 69, 00:15:12.181 "qid": 0, 00:15:12.181 "state": "enabled", 00:15:12.181 "thread": "nvmf_tgt_poll_group_000", 00:15:12.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:12.181 "listen_address": { 00:15:12.181 "trtype": "TCP", 00:15:12.181 "adrfam": "IPv4", 00:15:12.181 "traddr": "10.0.0.2", 00:15:12.181 "trsvcid": "4420" 00:15:12.181 }, 00:15:12.181 "peer_address": { 00:15:12.181 "trtype": "TCP", 00:15:12.181 "adrfam": "IPv4", 00:15:12.181 "traddr": "10.0.0.1", 00:15:12.181 "trsvcid": "50008" 00:15:12.181 }, 00:15:12.181 "auth": { 00:15:12.181 "state": "completed", 00:15:12.181 "digest": "sha384", 00:15:12.181 "dhgroup": "ffdhe3072" 00:15:12.181 } 00:15:12.181 } 00:15:12.181 ]' 00:15:12.181 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.181 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:12.181 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.441 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:12.441 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.441 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.441 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.441 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:15:12.441 05:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:15:12.441 05:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:15:13.033 05:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.033 05:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:13.033 05:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.033 05:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.033 05:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.033 05:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.033 05:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:13.033 05:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:13.292 05:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:13.292 05:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.292 05:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:13.292 05:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:13.292 05:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:13.292 05:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.292 05:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:13.292 05:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.292 05:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.292 05:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.292 05:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
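The key3 iterations above differ from key0-key2 in one detail: nvmf_subsystem_add_host and bdev_nvme_attach_controller are called without --dhchap-ctrlr-key, so only the host proves its identity and the controller is not authenticated back. That comes from the array expansion echoed in the trace; a rough reconstruction follows (the $hostnqn variable name is a guess, the expanded UUID-based NQN is what the log shows):

  # $3 is the key index argument of connect_authenticate (digest, dhgroup, keyid)
  ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})   # empty array when no ctrl key exists for this index
  # for keyid 3 this expands to "--dhchap-key key3" only, matching the add_host call above
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"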
00:15:13.292 05:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:13.292 05:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:13.551 00:15:13.551 05:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.551 05:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.551 05:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.810 05:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.810 05:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.810 05:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.810 05:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.810 05:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.810 05:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.810 { 00:15:13.810 "cntlid": 71, 00:15:13.810 "qid": 0, 00:15:13.810 "state": "enabled", 00:15:13.810 "thread": "nvmf_tgt_poll_group_000", 00:15:13.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:13.810 "listen_address": { 00:15:13.810 "trtype": "TCP", 00:15:13.810 "adrfam": "IPv4", 00:15:13.810 "traddr": "10.0.0.2", 00:15:13.810 "trsvcid": "4420" 00:15:13.810 }, 00:15:13.810 "peer_address": { 00:15:13.810 "trtype": "TCP", 00:15:13.810 "adrfam": "IPv4", 00:15:13.810 "traddr": "10.0.0.1", 00:15:13.810 "trsvcid": "50026" 00:15:13.810 }, 00:15:13.810 "auth": { 00:15:13.810 "state": "completed", 00:15:13.810 "digest": "sha384", 00:15:13.810 "dhgroup": "ffdhe3072" 00:15:13.810 } 00:15:13.810 } 00:15:13.810 ]' 00:15:13.810 05:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:13.810 05:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:13.810 05:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.810 05:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:13.810 05:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.069 05:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.069 05:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.069 05:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.069 05:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:15:14.069 05:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:15:14.636 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.636 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:14.636 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.636 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.636 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.636 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:14.636 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.636 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:14.637 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:14.895 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:14.895 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:14.895 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:14.895 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:14.895 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:14.895 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.895 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.895 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.895 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.895 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
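Each digest/dhgroup pass also re-checks the same keys from the kernel initiator via nvme-cli, which is what produces the "disconnected 1 controller(s)" lines above. Condensed from the echoed commands, with the DHHC-1 secrets and NQN/host UUID shortened to placeholders (the full strings appear verbatim in the trace):

  # kernel host connects in-band with the same DH-HMAC-CHAP secrets
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q <hostnqn> --hostid <hostid> -l 0 \
      --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0        # "disconnected 1 controller(s)" on success
  # target side: drop the host entry so the next combination starts from a clean state
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <hostnqn>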
00:15:14.895 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.895 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.895 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.154 00:15:15.154 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.154 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.154 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.413 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.414 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.414 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.414 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.414 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.414 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:15.414 { 00:15:15.414 "cntlid": 73, 00:15:15.414 "qid": 0, 00:15:15.414 "state": "enabled", 00:15:15.414 "thread": "nvmf_tgt_poll_group_000", 00:15:15.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:15.414 "listen_address": { 00:15:15.414 "trtype": "TCP", 00:15:15.414 "adrfam": "IPv4", 00:15:15.414 "traddr": "10.0.0.2", 00:15:15.414 "trsvcid": "4420" 00:15:15.414 }, 00:15:15.414 "peer_address": { 00:15:15.414 "trtype": "TCP", 00:15:15.414 "adrfam": "IPv4", 00:15:15.414 "traddr": "10.0.0.1", 00:15:15.414 "trsvcid": "50070" 00:15:15.414 }, 00:15:15.414 "auth": { 00:15:15.414 "state": "completed", 00:15:15.414 "digest": "sha384", 00:15:15.414 "dhgroup": "ffdhe4096" 00:15:15.414 } 00:15:15.414 } 00:15:15.414 ]' 00:15:15.414 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.414 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:15.414 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.414 05:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:15.414 05:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.414 05:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.414 
05:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.414 05:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.673 05:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:15:15.673 05:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:15:16.239 05:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.239 05:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:16.239 05:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.239 05:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.239 05:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.239 05:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.239 05:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:16.239 05:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:16.498 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:16.498 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.498 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:16.498 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:16.498 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:16.498 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.498 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.498 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.498 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.498 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.498 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.498 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.498 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.756 00:15:16.756 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.756 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.756 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.014 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.014 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.014 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.014 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.014 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.014 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.014 { 00:15:17.014 "cntlid": 75, 00:15:17.014 "qid": 0, 00:15:17.014 "state": "enabled", 00:15:17.014 "thread": "nvmf_tgt_poll_group_000", 00:15:17.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:17.014 "listen_address": { 00:15:17.014 "trtype": "TCP", 00:15:17.014 "adrfam": "IPv4", 00:15:17.014 "traddr": "10.0.0.2", 00:15:17.014 "trsvcid": "4420" 00:15:17.014 }, 00:15:17.014 "peer_address": { 00:15:17.014 "trtype": "TCP", 00:15:17.014 "adrfam": "IPv4", 00:15:17.014 "traddr": "10.0.0.1", 00:15:17.014 "trsvcid": "50106" 00:15:17.014 }, 00:15:17.014 "auth": { 00:15:17.014 "state": "completed", 00:15:17.014 "digest": "sha384", 00:15:17.014 "dhgroup": "ffdhe4096" 00:15:17.014 } 00:15:17.014 } 00:15:17.014 ]' 00:15:17.014 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.015 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:17.015 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.015 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:15:17.015 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.273 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.273 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.273 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.273 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:15:17.273 05:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:15:17.839 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.839 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:17.839 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.839 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.839 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.839 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.839 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:17.839 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:18.097 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:18.098 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.098 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:18.098 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:18.098 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:18.098 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.098 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.098 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.098 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.098 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.098 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.098 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.098 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.356 00:15:18.356 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.356 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.356 05:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.643 05:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.643 05:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.643 05:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.643 05:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.643 05:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.643 05:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.643 { 00:15:18.643 "cntlid": 77, 00:15:18.643 "qid": 0, 00:15:18.643 "state": "enabled", 00:15:18.643 "thread": "nvmf_tgt_poll_group_000", 00:15:18.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:18.643 "listen_address": { 00:15:18.643 "trtype": "TCP", 00:15:18.643 "adrfam": "IPv4", 00:15:18.643 "traddr": "10.0.0.2", 00:15:18.643 "trsvcid": "4420" 00:15:18.643 }, 00:15:18.643 "peer_address": { 00:15:18.643 "trtype": "TCP", 00:15:18.643 "adrfam": "IPv4", 00:15:18.643 "traddr": "10.0.0.1", 00:15:18.643 "trsvcid": "50134" 00:15:18.643 }, 00:15:18.643 "auth": { 00:15:18.643 "state": "completed", 00:15:18.643 "digest": "sha384", 00:15:18.643 "dhgroup": "ffdhe4096" 00:15:18.643 } 00:15:18.643 } 00:15:18.643 ]' 00:15:18.643 05:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.643 05:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:18.643 05:09:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.643 05:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:18.643 05:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.643 05:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.643 05:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.643 05:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.901 05:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:15:18.901 05:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:15:19.468 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.468 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:19.468 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.468 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.468 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.468 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.468 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:19.468 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:19.727 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:19.727 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.727 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:19.727 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:19.727 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:19.727 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.727 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:19.727 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.727 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.727 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.727 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:19.727 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:19.727 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:19.998 00:15:19.998 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.998 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.998 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.257 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.257 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.257 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.257 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.257 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.257 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.257 { 00:15:20.257 "cntlid": 79, 00:15:20.257 "qid": 0, 00:15:20.257 "state": "enabled", 00:15:20.257 "thread": "nvmf_tgt_poll_group_000", 00:15:20.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:20.257 "listen_address": { 00:15:20.257 "trtype": "TCP", 00:15:20.257 "adrfam": "IPv4", 00:15:20.257 "traddr": "10.0.0.2", 00:15:20.257 "trsvcid": "4420" 00:15:20.257 }, 00:15:20.257 "peer_address": { 00:15:20.257 "trtype": "TCP", 00:15:20.257 "adrfam": "IPv4", 00:15:20.257 "traddr": "10.0.0.1", 00:15:20.257 "trsvcid": "35014" 00:15:20.257 }, 00:15:20.257 "auth": { 00:15:20.257 "state": "completed", 00:15:20.257 "digest": "sha384", 00:15:20.257 "dhgroup": "ffdhe4096" 00:15:20.257 } 00:15:20.257 } 00:15:20.257 ]' 00:15:20.257 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.257 05:09:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:20.257 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.257 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:20.257 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.257 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.257 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.257 05:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.516 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:15:20.516 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:15:21.084 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.084 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:21.084 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.084 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.084 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.084 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:21.084 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.084 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:21.084 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:21.343 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:15:21.343 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.343 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:21.343 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:21.343 05:09:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:21.343 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.343 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.343 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.343 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.343 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.343 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.343 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.343 05:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.602 00:15:21.602 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.602 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.602 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.861 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.861 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.861 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.861 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.861 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.861 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:21.861 { 00:15:21.861 "cntlid": 81, 00:15:21.861 "qid": 0, 00:15:21.861 "state": "enabled", 00:15:21.861 "thread": "nvmf_tgt_poll_group_000", 00:15:21.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:21.861 "listen_address": { 00:15:21.861 "trtype": "TCP", 00:15:21.861 "adrfam": "IPv4", 00:15:21.861 "traddr": "10.0.0.2", 00:15:21.861 "trsvcid": "4420" 00:15:21.861 }, 00:15:21.861 "peer_address": { 00:15:21.861 "trtype": "TCP", 00:15:21.861 "adrfam": "IPv4", 00:15:21.861 "traddr": "10.0.0.1", 00:15:21.861 "trsvcid": "35042" 00:15:21.861 }, 00:15:21.861 "auth": { 00:15:21.861 "state": "completed", 00:15:21.861 "digest": 
"sha384", 00:15:21.861 "dhgroup": "ffdhe6144" 00:15:21.861 } 00:15:21.861 } 00:15:21.861 ]' 00:15:21.861 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:21.861 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:21.861 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:21.861 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:21.861 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.121 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.121 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.121 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.121 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:15:22.121 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:15:22.689 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.689 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:22.689 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.689 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.689 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.689 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:22.689 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:22.689 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:22.948 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:15:22.948 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.948 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:22.948 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:22.948 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:22.948 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.948 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.948 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.948 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.948 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.948 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.948 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.948 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.517 00:15:23.517 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.517 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:23.517 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.517 05:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.517 05:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.517 05:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.517 05:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.517 05:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.517 05:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:23.517 { 00:15:23.517 "cntlid": 83, 00:15:23.517 "qid": 0, 00:15:23.517 "state": "enabled", 00:15:23.517 "thread": "nvmf_tgt_poll_group_000", 00:15:23.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:23.517 "listen_address": { 00:15:23.517 "trtype": "TCP", 00:15:23.517 "adrfam": "IPv4", 00:15:23.517 "traddr": "10.0.0.2", 00:15:23.517 
"trsvcid": "4420" 00:15:23.517 }, 00:15:23.517 "peer_address": { 00:15:23.517 "trtype": "TCP", 00:15:23.517 "adrfam": "IPv4", 00:15:23.517 "traddr": "10.0.0.1", 00:15:23.517 "trsvcid": "35058" 00:15:23.517 }, 00:15:23.517 "auth": { 00:15:23.517 "state": "completed", 00:15:23.517 "digest": "sha384", 00:15:23.517 "dhgroup": "ffdhe6144" 00:15:23.517 } 00:15:23.517 } 00:15:23.517 ]' 00:15:23.517 05:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:23.776 05:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:23.776 05:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:23.776 05:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:23.776 05:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:23.776 05:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.776 05:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.776 05:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.035 05:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:15:24.035 05:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:15:24.603 05:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.603 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:24.603 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.603 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.603 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.603 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:24.603 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:24.603 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:24.603 
05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:15:24.603 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.603 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:24.603 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:24.603 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:24.603 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.603 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.603 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.603 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.603 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.603 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.603 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.603 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.171 00:15:25.171 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.171 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:25.171 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.171 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.171 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.171 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.171 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.171 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.171 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:25.171 { 00:15:25.171 "cntlid": 85, 00:15:25.171 "qid": 0, 00:15:25.171 "state": "enabled", 00:15:25.171 "thread": "nvmf_tgt_poll_group_000", 00:15:25.171 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:25.171 "listen_address": { 00:15:25.171 "trtype": "TCP", 00:15:25.171 "adrfam": "IPv4", 00:15:25.171 "traddr": "10.0.0.2", 00:15:25.171 "trsvcid": "4420" 00:15:25.171 }, 00:15:25.171 "peer_address": { 00:15:25.171 "trtype": "TCP", 00:15:25.171 "adrfam": "IPv4", 00:15:25.171 "traddr": "10.0.0.1", 00:15:25.171 "trsvcid": "35072" 00:15:25.171 }, 00:15:25.171 "auth": { 00:15:25.171 "state": "completed", 00:15:25.171 "digest": "sha384", 00:15:25.171 "dhgroup": "ffdhe6144" 00:15:25.171 } 00:15:25.171 } 00:15:25.171 ]' 00:15:25.171 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:25.429 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:25.429 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:25.429 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:25.429 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:25.429 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.429 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.429 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.687 05:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:15:25.687 05:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:15:26.253 05:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.253 05:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:26.253 05:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.253 05:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.253 05:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.253 05:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.253 05:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:26.253 05:10:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:26.253 05:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:15:26.253 05:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.253 05:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:26.253 05:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:26.253 05:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:26.253 05:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.253 05:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:26.253 05:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.253 05:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.511 05:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.511 05:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:26.511 05:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:26.511 05:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:26.868 00:15:26.868 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.868 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.868 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.868 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.868 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.152 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.152 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.152 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.152 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.152 { 00:15:27.152 "cntlid": 87, 
00:15:27.152 "qid": 0, 00:15:27.152 "state": "enabled", 00:15:27.152 "thread": "nvmf_tgt_poll_group_000", 00:15:27.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:27.152 "listen_address": { 00:15:27.152 "trtype": "TCP", 00:15:27.152 "adrfam": "IPv4", 00:15:27.152 "traddr": "10.0.0.2", 00:15:27.152 "trsvcid": "4420" 00:15:27.152 }, 00:15:27.152 "peer_address": { 00:15:27.152 "trtype": "TCP", 00:15:27.152 "adrfam": "IPv4", 00:15:27.152 "traddr": "10.0.0.1", 00:15:27.152 "trsvcid": "35106" 00:15:27.152 }, 00:15:27.152 "auth": { 00:15:27.152 "state": "completed", 00:15:27.152 "digest": "sha384", 00:15:27.152 "dhgroup": "ffdhe6144" 00:15:27.152 } 00:15:27.152 } 00:15:27.152 ]' 00:15:27.152 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.152 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:27.152 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.152 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:27.152 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.152 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.152 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.152 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.411 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:15:27.411 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:15:27.978 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.978 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:27.978 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.978 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.978 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.978 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:27.978 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.978 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:27.978 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:27.978 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:15:27.978 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.978 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:27.978 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:27.978 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:27.978 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.978 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.978 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.978 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.236 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.236 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.236 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.236 05:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.494 00:15:28.494 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:28.494 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.494 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.752 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.752 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.752 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.752 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.752 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.752 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.752 { 00:15:28.752 "cntlid": 89, 00:15:28.752 "qid": 0, 00:15:28.752 "state": "enabled", 00:15:28.752 "thread": "nvmf_tgt_poll_group_000", 00:15:28.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:28.752 "listen_address": { 00:15:28.752 "trtype": "TCP", 00:15:28.752 "adrfam": "IPv4", 00:15:28.752 "traddr": "10.0.0.2", 00:15:28.752 "trsvcid": "4420" 00:15:28.752 }, 00:15:28.752 "peer_address": { 00:15:28.752 "trtype": "TCP", 00:15:28.752 "adrfam": "IPv4", 00:15:28.752 "traddr": "10.0.0.1", 00:15:28.752 "trsvcid": "35134" 00:15:28.752 }, 00:15:28.752 "auth": { 00:15:28.752 "state": "completed", 00:15:28.752 "digest": "sha384", 00:15:28.752 "dhgroup": "ffdhe8192" 00:15:28.752 } 00:15:28.752 } 00:15:28.752 ]' 00:15:28.752 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.752 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:28.752 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.011 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:29.011 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.011 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.011 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.011 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.011 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:15:29.011 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:15:29.578 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.578 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:29.578 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.578 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.836 05:10:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.836 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:29.836 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:29.836 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:29.837 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:15:29.837 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.837 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:29.837 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:29.837 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:29.837 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.837 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.837 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.837 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.837 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.837 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.837 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.837 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.405 00:15:30.405 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.405 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.405 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.665 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.665 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:15:30.665 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.665 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.665 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.665 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.665 { 00:15:30.665 "cntlid": 91, 00:15:30.665 "qid": 0, 00:15:30.665 "state": "enabled", 00:15:30.665 "thread": "nvmf_tgt_poll_group_000", 00:15:30.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:30.665 "listen_address": { 00:15:30.665 "trtype": "TCP", 00:15:30.665 "adrfam": "IPv4", 00:15:30.665 "traddr": "10.0.0.2", 00:15:30.665 "trsvcid": "4420" 00:15:30.665 }, 00:15:30.665 "peer_address": { 00:15:30.665 "trtype": "TCP", 00:15:30.665 "adrfam": "IPv4", 00:15:30.665 "traddr": "10.0.0.1", 00:15:30.665 "trsvcid": "56892" 00:15:30.665 }, 00:15:30.665 "auth": { 00:15:30.665 "state": "completed", 00:15:30.665 "digest": "sha384", 00:15:30.665 "dhgroup": "ffdhe8192" 00:15:30.665 } 00:15:30.665 } 00:15:30.665 ]' 00:15:30.665 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.665 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:30.665 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.665 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:30.665 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.665 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.665 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.665 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.923 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:15:30.923 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:15:31.490 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.490 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:31.490 05:10:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.490 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.490 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.490 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.490 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:31.490 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:31.749 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:15:31.749 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.749 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:31.749 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:31.749 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:31.749 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.749 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.749 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.749 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.749 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.749 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.749 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.749 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.317 00:15:32.317 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.317 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.317 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.317 05:10:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.317 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.317 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.317 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.317 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.317 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.317 { 00:15:32.317 "cntlid": 93, 00:15:32.317 "qid": 0, 00:15:32.317 "state": "enabled", 00:15:32.317 "thread": "nvmf_tgt_poll_group_000", 00:15:32.317 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:32.317 "listen_address": { 00:15:32.317 "trtype": "TCP", 00:15:32.317 "adrfam": "IPv4", 00:15:32.317 "traddr": "10.0.0.2", 00:15:32.317 "trsvcid": "4420" 00:15:32.317 }, 00:15:32.317 "peer_address": { 00:15:32.317 "trtype": "TCP", 00:15:32.317 "adrfam": "IPv4", 00:15:32.317 "traddr": "10.0.0.1", 00:15:32.317 "trsvcid": "56910" 00:15:32.317 }, 00:15:32.317 "auth": { 00:15:32.317 "state": "completed", 00:15:32.317 "digest": "sha384", 00:15:32.318 "dhgroup": "ffdhe8192" 00:15:32.318 } 00:15:32.318 } 00:15:32.318 ]' 00:15:32.318 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.577 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:32.577 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.577 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:32.577 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.577 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.577 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.577 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.837 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:15:32.837 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:15:33.406 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.406 05:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:33.406 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.406 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.406 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.406 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.406 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:33.406 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:33.406 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:15:33.406 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.406 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:33.406 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:33.406 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:33.406 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.406 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:33.407 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.407 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.407 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.407 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:33.407 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:33.407 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:33.976 00:15:33.976 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.976 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.976 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.236 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.236 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.236 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.236 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.236 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.236 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.236 { 00:15:34.236 "cntlid": 95, 00:15:34.236 "qid": 0, 00:15:34.236 "state": "enabled", 00:15:34.236 "thread": "nvmf_tgt_poll_group_000", 00:15:34.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:34.236 "listen_address": { 00:15:34.236 "trtype": "TCP", 00:15:34.236 "adrfam": "IPv4", 00:15:34.236 "traddr": "10.0.0.2", 00:15:34.236 "trsvcid": "4420" 00:15:34.236 }, 00:15:34.236 "peer_address": { 00:15:34.236 "trtype": "TCP", 00:15:34.236 "adrfam": "IPv4", 00:15:34.236 "traddr": "10.0.0.1", 00:15:34.236 "trsvcid": "56936" 00:15:34.236 }, 00:15:34.236 "auth": { 00:15:34.236 "state": "completed", 00:15:34.236 "digest": "sha384", 00:15:34.236 "dhgroup": "ffdhe8192" 00:15:34.236 } 00:15:34.236 } 00:15:34.236 ]' 00:15:34.236 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.236 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:34.236 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.236 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:34.236 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.494 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.494 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.494 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.494 05:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:15:34.494 05:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:15:35.060 05:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.060 05:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:35.060 05:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.060 05:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.060 05:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.060 05:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:35.060 05:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:35.060 05:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.060 05:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:35.060 05:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:35.318 05:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:15:35.318 05:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.318 05:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:35.318 05:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:35.318 05:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:35.318 05:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.318 05:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.318 05:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.318 05:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.318 05:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.318 05:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.318 05:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.318 05:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.576 00:15:35.576 
05:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.576 05:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.576 05:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.835 05:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.835 05:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.835 05:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.835 05:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.835 05:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.835 05:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.835 { 00:15:35.835 "cntlid": 97, 00:15:35.835 "qid": 0, 00:15:35.835 "state": "enabled", 00:15:35.835 "thread": "nvmf_tgt_poll_group_000", 00:15:35.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:35.836 "listen_address": { 00:15:35.836 "trtype": "TCP", 00:15:35.836 "adrfam": "IPv4", 00:15:35.836 "traddr": "10.0.0.2", 00:15:35.836 "trsvcid": "4420" 00:15:35.836 }, 00:15:35.836 "peer_address": { 00:15:35.836 "trtype": "TCP", 00:15:35.836 "adrfam": "IPv4", 00:15:35.836 "traddr": "10.0.0.1", 00:15:35.836 "trsvcid": "56968" 00:15:35.836 }, 00:15:35.836 "auth": { 00:15:35.836 "state": "completed", 00:15:35.836 "digest": "sha512", 00:15:35.836 "dhgroup": "null" 00:15:35.836 } 00:15:35.836 } 00:15:35.836 ]' 00:15:35.836 05:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.836 05:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:35.836 05:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.836 05:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:35.836 05:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.836 05:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.836 05:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.836 05:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.095 05:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:15:36.095 05:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:15:36.680 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.680 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:36.680 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.680 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.680 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.680 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.680 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:36.680 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:36.939 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:15:36.939 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.939 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:36.939 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:36.939 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:36.939 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.939 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.939 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.939 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.939 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.939 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.939 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.939 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.210 00:15:37.210 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.210 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.210 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.470 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.470 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.470 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.470 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.470 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.470 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.470 { 00:15:37.470 "cntlid": 99, 00:15:37.470 "qid": 0, 00:15:37.470 "state": "enabled", 00:15:37.470 "thread": "nvmf_tgt_poll_group_000", 00:15:37.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:37.470 "listen_address": { 00:15:37.470 "trtype": "TCP", 00:15:37.470 "adrfam": "IPv4", 00:15:37.470 "traddr": "10.0.0.2", 00:15:37.470 "trsvcid": "4420" 00:15:37.470 }, 00:15:37.470 "peer_address": { 00:15:37.470 "trtype": "TCP", 00:15:37.470 "adrfam": "IPv4", 00:15:37.470 "traddr": "10.0.0.1", 00:15:37.470 "trsvcid": "57004" 00:15:37.470 }, 00:15:37.470 "auth": { 00:15:37.470 "state": "completed", 00:15:37.470 "digest": "sha512", 00:15:37.470 "dhgroup": "null" 00:15:37.470 } 00:15:37.470 } 00:15:37.470 ]' 00:15:37.470 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.470 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:37.470 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.470 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:37.470 05:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.470 05:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.470 05:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.470 05:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.727 05:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:15:37.727 05:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:15:38.292 05:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.292 05:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:38.292 05:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.292 05:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.292 05:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.292 05:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.292 05:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:38.292 05:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:38.550 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:15:38.550 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.550 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:38.550 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:38.550 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:38.550 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.550 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.550 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.550 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.550 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.550 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.550 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:15:38.550 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.808 00:15:38.808 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.808 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.808 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.066 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.066 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.066 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.066 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.066 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.066 { 00:15:39.066 "cntlid": 101, 00:15:39.066 "qid": 0, 00:15:39.066 "state": "enabled", 00:15:39.066 "thread": "nvmf_tgt_poll_group_000", 00:15:39.066 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:39.066 "listen_address": { 00:15:39.066 "trtype": "TCP", 00:15:39.066 "adrfam": "IPv4", 00:15:39.066 "traddr": "10.0.0.2", 00:15:39.066 "trsvcid": "4420" 00:15:39.066 }, 00:15:39.066 "peer_address": { 00:15:39.066 "trtype": "TCP", 00:15:39.066 "adrfam": "IPv4", 00:15:39.066 "traddr": "10.0.0.1", 00:15:39.066 "trsvcid": "57036" 00:15:39.066 }, 00:15:39.066 "auth": { 00:15:39.066 "state": "completed", 00:15:39.066 "digest": "sha512", 00:15:39.066 "dhgroup": "null" 00:15:39.066 } 00:15:39.066 } 00:15:39.066 ]' 00:15:39.066 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.066 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:39.066 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.066 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:39.066 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.066 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.066 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.066 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.324 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:15:39.324 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:15:39.891 05:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.891 05:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:39.891 05:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.891 05:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.891 05:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.891 05:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.891 05:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:39.891 05:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:40.150 05:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:15:40.150 05:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.150 05:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:40.150 05:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:40.150 05:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:40.150 05:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.150 05:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:40.150 05:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.150 05:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.150 05:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.150 05:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:40.150 05:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:40.150 05:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:40.409 00:15:40.409 05:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.409 05:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.409 05:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.409 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.409 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.409 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.409 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.668 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.668 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.668 { 00:15:40.668 "cntlid": 103, 00:15:40.668 "qid": 0, 00:15:40.668 "state": "enabled", 00:15:40.668 "thread": "nvmf_tgt_poll_group_000", 00:15:40.668 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:40.668 "listen_address": { 00:15:40.668 "trtype": "TCP", 00:15:40.668 "adrfam": "IPv4", 00:15:40.668 "traddr": "10.0.0.2", 00:15:40.668 "trsvcid": "4420" 00:15:40.668 }, 00:15:40.668 "peer_address": { 00:15:40.668 "trtype": "TCP", 00:15:40.668 "adrfam": "IPv4", 00:15:40.668 "traddr": "10.0.0.1", 00:15:40.668 "trsvcid": "43458" 00:15:40.668 }, 00:15:40.668 "auth": { 00:15:40.668 "state": "completed", 00:15:40.668 "digest": "sha512", 00:15:40.668 "dhgroup": "null" 00:15:40.668 } 00:15:40.668 } 00:15:40.668 ]' 00:15:40.668 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.668 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:40.668 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.668 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:40.668 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.668 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.668 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.669 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.927 05:10:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:15:40.928 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:15:41.496 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.496 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:41.496 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.496 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.496 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.496 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:41.496 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.496 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:41.496 05:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:41.755 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:15:41.755 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.755 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:41.755 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:41.755 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:41.755 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.755 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.755 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.755 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.755 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.755 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:15:41.755 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.755 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.755 00:15:42.014 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.014 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.014 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.014 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.014 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.014 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.014 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.014 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.014 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.014 { 00:15:42.014 "cntlid": 105, 00:15:42.014 "qid": 0, 00:15:42.014 "state": "enabled", 00:15:42.014 "thread": "nvmf_tgt_poll_group_000", 00:15:42.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:42.014 "listen_address": { 00:15:42.014 "trtype": "TCP", 00:15:42.014 "adrfam": "IPv4", 00:15:42.014 "traddr": "10.0.0.2", 00:15:42.014 "trsvcid": "4420" 00:15:42.014 }, 00:15:42.014 "peer_address": { 00:15:42.014 "trtype": "TCP", 00:15:42.014 "adrfam": "IPv4", 00:15:42.014 "traddr": "10.0.0.1", 00:15:42.014 "trsvcid": "43478" 00:15:42.014 }, 00:15:42.014 "auth": { 00:15:42.014 "state": "completed", 00:15:42.014 "digest": "sha512", 00:15:42.014 "dhgroup": "ffdhe2048" 00:15:42.014 } 00:15:42.014 } 00:15:42.014 ]' 00:15:42.014 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.014 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:42.014 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.274 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:42.274 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.274 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.274 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.274 05:10:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.274 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:15:42.274 05:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:15:42.842 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.101 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:43.101 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.101 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.101 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.101 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.101 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:43.101 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:43.101 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:15:43.101 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.101 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:43.101 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:43.101 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:43.101 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.101 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.101 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.101 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:43.101 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.101 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.101 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.101 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.360 00:15:43.360 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.360 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.360 05:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.618 05:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.618 05:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.618 05:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.618 05:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.618 05:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.618 05:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.618 { 00:15:43.618 "cntlid": 107, 00:15:43.618 "qid": 0, 00:15:43.618 "state": "enabled", 00:15:43.618 "thread": "nvmf_tgt_poll_group_000", 00:15:43.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:43.618 "listen_address": { 00:15:43.618 "trtype": "TCP", 00:15:43.618 "adrfam": "IPv4", 00:15:43.618 "traddr": "10.0.0.2", 00:15:43.618 "trsvcid": "4420" 00:15:43.618 }, 00:15:43.618 "peer_address": { 00:15:43.618 "trtype": "TCP", 00:15:43.618 "adrfam": "IPv4", 00:15:43.618 "traddr": "10.0.0.1", 00:15:43.618 "trsvcid": "43504" 00:15:43.618 }, 00:15:43.618 "auth": { 00:15:43.618 "state": "completed", 00:15:43.618 "digest": "sha512", 00:15:43.618 "dhgroup": "ffdhe2048" 00:15:43.618 } 00:15:43.618 } 00:15:43.618 ]' 00:15:43.618 05:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.619 05:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:43.619 05:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.876 05:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:43.876 05:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:15:43.876 05:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.876 05:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.876 05:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.133 05:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:15:44.133 05:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:15:44.698 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.698 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:44.698 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.698 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.698 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.698 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.698 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:44.698 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:44.698 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:15:44.698 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.698 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:44.698 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:44.698 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:44.698 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.698 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
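For reference, the per-key cycle the trace keeps repeating can be reduced to a short sketch. This is an illustrative reconstruction, not the actual target/auth.sh: it assumes the target RPC server listens on its default socket, the host bdev_nvme instance listens on /var/tmp/host.sock, and that keys named key2/ckey2 were registered on both sides earlier in the run.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
# Pin the host to a single digest/dhgroup combination for this iteration.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
# Allow the host on the subsystem with a bidirectional key pair (target side, default RPC socket).
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2
# Attach a controller from the host; DH-CHAP authentication runs during this connect.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2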
00:15:44.698 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.698 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.698 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.698 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.698 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.698 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.956 00:15:44.956 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.956 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.956 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.214 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.214 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.214 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.214 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.214 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.214 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.214 { 00:15:45.214 "cntlid": 109, 00:15:45.214 "qid": 0, 00:15:45.214 "state": "enabled", 00:15:45.214 "thread": "nvmf_tgt_poll_group_000", 00:15:45.214 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:45.214 "listen_address": { 00:15:45.214 "trtype": "TCP", 00:15:45.214 "adrfam": "IPv4", 00:15:45.214 "traddr": "10.0.0.2", 00:15:45.214 "trsvcid": "4420" 00:15:45.214 }, 00:15:45.214 "peer_address": { 00:15:45.214 "trtype": "TCP", 00:15:45.214 "adrfam": "IPv4", 00:15:45.214 "traddr": "10.0.0.1", 00:15:45.214 "trsvcid": "43528" 00:15:45.214 }, 00:15:45.214 "auth": { 00:15:45.214 "state": "completed", 00:15:45.214 "digest": "sha512", 00:15:45.214 "dhgroup": "ffdhe2048" 00:15:45.214 } 00:15:45.214 } 00:15:45.214 ]' 00:15:45.214 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.214 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:45.214 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.471 05:10:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:45.471 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.471 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.471 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.471 05:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.728 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:15:45.728 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:15:46.294 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.294 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:46.294 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.294 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.294 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.294 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.294 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:46.294 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:46.294 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:15:46.294 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.294 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:46.294 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:46.294 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:46.294 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.294 05:10:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:46.294 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.294 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.294 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.294 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:46.295 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:46.295 05:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:46.553 00:15:46.553 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.553 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.553 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.810 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.810 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.810 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.810 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.810 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.810 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.810 { 00:15:46.810 "cntlid": 111, 00:15:46.810 "qid": 0, 00:15:46.810 "state": "enabled", 00:15:46.810 "thread": "nvmf_tgt_poll_group_000", 00:15:46.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:46.810 "listen_address": { 00:15:46.810 "trtype": "TCP", 00:15:46.810 "adrfam": "IPv4", 00:15:46.810 "traddr": "10.0.0.2", 00:15:46.810 "trsvcid": "4420" 00:15:46.810 }, 00:15:46.810 "peer_address": { 00:15:46.810 "trtype": "TCP", 00:15:46.810 "adrfam": "IPv4", 00:15:46.810 "traddr": "10.0.0.1", 00:15:46.810 "trsvcid": "43552" 00:15:46.810 }, 00:15:46.810 "auth": { 00:15:46.810 "state": "completed", 00:15:46.810 "digest": "sha512", 00:15:46.810 "dhgroup": "ffdhe2048" 00:15:46.810 } 00:15:46.810 } 00:15:46.810 ]' 00:15:46.810 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.810 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:46.810 
05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.810 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:46.810 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.068 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.068 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.068 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.068 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:15:47.068 05:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:15:47.635 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.635 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:47.635 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.635 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.635 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.635 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:47.635 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.635 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:47.635 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:47.894 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:15:47.895 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.895 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:47.895 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:47.895 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:47.895 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.895 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.895 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.895 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.895 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.895 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.895 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.895 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.153 00:15:48.153 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.153 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.153 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.412 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.412 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.412 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.412 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.412 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.412 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.412 { 00:15:48.412 "cntlid": 113, 00:15:48.412 "qid": 0, 00:15:48.412 "state": "enabled", 00:15:48.412 "thread": "nvmf_tgt_poll_group_000", 00:15:48.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:48.412 "listen_address": { 00:15:48.412 "trtype": "TCP", 00:15:48.412 "adrfam": "IPv4", 00:15:48.412 "traddr": "10.0.0.2", 00:15:48.412 "trsvcid": "4420" 00:15:48.412 }, 00:15:48.412 "peer_address": { 00:15:48.412 "trtype": "TCP", 00:15:48.412 "adrfam": "IPv4", 00:15:48.412 "traddr": "10.0.0.1", 00:15:48.412 "trsvcid": "43578" 00:15:48.412 }, 00:15:48.412 "auth": { 00:15:48.412 "state": "completed", 00:15:48.412 "digest": "sha512", 00:15:48.412 "dhgroup": "ffdhe3072" 00:15:48.412 } 00:15:48.412 } 00:15:48.412 ]' 00:15:48.412 05:10:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.412 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:48.412 05:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:48.412 05:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:48.412 05:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.676 05:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.676 05:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.676 05:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.676 05:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:15:48.676 05:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:15:49.241 05:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.241 05:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:49.241 05:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.241 05:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.241 05:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.241 05:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.241 05:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:49.241 05:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:49.500 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:15:49.500 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.500 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:15:49.500 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:49.500 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:49.500 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.500 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.500 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.500 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.500 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.500 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.500 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.500 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.759 00:15:49.759 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.759 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.759 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.018 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.018 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.018 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.018 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.018 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.018 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.018 { 00:15:50.018 "cntlid": 115, 00:15:50.018 "qid": 0, 00:15:50.018 "state": "enabled", 00:15:50.018 "thread": "nvmf_tgt_poll_group_000", 00:15:50.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:50.018 "listen_address": { 00:15:50.018 "trtype": "TCP", 00:15:50.018 "adrfam": "IPv4", 00:15:50.018 "traddr": "10.0.0.2", 00:15:50.018 "trsvcid": "4420" 00:15:50.018 }, 00:15:50.018 "peer_address": { 00:15:50.018 "trtype": "TCP", 00:15:50.018 "adrfam": "IPv4", 
00:15:50.018 "traddr": "10.0.0.1", 00:15:50.018 "trsvcid": "59852" 00:15:50.018 }, 00:15:50.018 "auth": { 00:15:50.018 "state": "completed", 00:15:50.018 "digest": "sha512", 00:15:50.018 "dhgroup": "ffdhe3072" 00:15:50.018 } 00:15:50.018 } 00:15:50.018 ]' 00:15:50.018 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.018 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:50.018 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.018 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:50.018 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.277 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.277 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.277 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.277 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:15:50.277 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:15:50.845 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.845 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:50.845 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.845 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.845 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.845 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.845 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:50.845 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:51.105 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
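The jq probes in the entries above are how each cycle is verified; a minimal standalone version of that check, assuming the same sockets and NQNs as in this run, would look like:
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# The host should report exactly one attached controller named nvme0.
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
# The target's qpair listing carries the negotiated auth parameters.
qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
echo "$qpairs" | jq -r '.[0].auth.digest'    # expect: sha512
echo "$qpairs" | jq -r '.[0].auth.dhgroup'   # expect: ffdhe3072 in this iteration
echo "$qpairs" | jq -r '.[0].auth.state'     # expect: completed
# Tear the connection down before the next key/dhgroup combination.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0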
00:15:51.105 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.105 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:51.105 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:51.105 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:51.105 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.105 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.105 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.105 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.105 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.105 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.105 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.105 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.363 00:15:51.363 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.363 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.363 05:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.621 05:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.621 05:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.621 05:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.621 05:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.621 05:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.621 05:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.621 { 00:15:51.621 "cntlid": 117, 00:15:51.621 "qid": 0, 00:15:51.621 "state": "enabled", 00:15:51.621 "thread": "nvmf_tgt_poll_group_000", 00:15:51.621 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:51.621 "listen_address": { 00:15:51.621 "trtype": "TCP", 
00:15:51.621 "adrfam": "IPv4", 00:15:51.621 "traddr": "10.0.0.2", 00:15:51.621 "trsvcid": "4420" 00:15:51.621 }, 00:15:51.621 "peer_address": { 00:15:51.621 "trtype": "TCP", 00:15:51.621 "adrfam": "IPv4", 00:15:51.621 "traddr": "10.0.0.1", 00:15:51.621 "trsvcid": "59880" 00:15:51.621 }, 00:15:51.621 "auth": { 00:15:51.621 "state": "completed", 00:15:51.621 "digest": "sha512", 00:15:51.621 "dhgroup": "ffdhe3072" 00:15:51.621 } 00:15:51.621 } 00:15:51.621 ]' 00:15:51.621 05:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.621 05:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:51.621 05:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.621 05:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:51.621 05:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.621 05:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.621 05:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.621 05:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.880 05:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:15:51.880 05:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:15:52.446 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.446 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:52.446 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.446 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.446 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.446 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.446 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:52.446 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:52.704 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:15:52.704 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.704 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:52.704 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:52.704 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:52.704 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.704 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:52.704 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.704 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.704 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.704 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:52.704 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:52.704 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:52.962 00:15:52.962 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.962 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.962 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.221 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.221 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.221 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.221 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.221 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.221 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.221 { 00:15:53.221 "cntlid": 119, 00:15:53.221 "qid": 0, 00:15:53.221 "state": "enabled", 00:15:53.221 "thread": "nvmf_tgt_poll_group_000", 00:15:53.221 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:53.221 "listen_address": { 00:15:53.221 "trtype": "TCP", 00:15:53.221 "adrfam": "IPv4", 00:15:53.221 "traddr": "10.0.0.2", 00:15:53.221 "trsvcid": "4420" 00:15:53.221 }, 00:15:53.221 "peer_address": { 00:15:53.221 "trtype": "TCP", 00:15:53.221 "adrfam": "IPv4", 00:15:53.221 "traddr": "10.0.0.1", 00:15:53.221 "trsvcid": "59890" 00:15:53.221 }, 00:15:53.221 "auth": { 00:15:53.221 "state": "completed", 00:15:53.221 "digest": "sha512", 00:15:53.221 "dhgroup": "ffdhe3072" 00:15:53.221 } 00:15:53.221 } 00:15:53.221 ]' 00:15:53.221 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.221 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:53.221 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.221 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:53.221 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.221 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.221 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.221 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.479 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:15:53.479 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:15:54.044 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.045 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:54.045 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.045 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.045 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.045 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:54.045 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.045 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:54.045 05:10:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:54.302 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:15:54.302 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.302 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:54.302 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:54.302 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:54.302 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.302 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.302 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.302 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.302 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.302 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.302 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.302 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.559 00:15:54.559 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.559 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.560 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.817 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.817 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.817 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.817 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.817 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.818 05:10:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.818 { 00:15:54.818 "cntlid": 121, 00:15:54.818 "qid": 0, 00:15:54.818 "state": "enabled", 00:15:54.818 "thread": "nvmf_tgt_poll_group_000", 00:15:54.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:54.818 "listen_address": { 00:15:54.818 "trtype": "TCP", 00:15:54.818 "adrfam": "IPv4", 00:15:54.818 "traddr": "10.0.0.2", 00:15:54.818 "trsvcid": "4420" 00:15:54.818 }, 00:15:54.818 "peer_address": { 00:15:54.818 "trtype": "TCP", 00:15:54.818 "adrfam": "IPv4", 00:15:54.818 "traddr": "10.0.0.1", 00:15:54.818 "trsvcid": "59926" 00:15:54.818 }, 00:15:54.818 "auth": { 00:15:54.818 "state": "completed", 00:15:54.818 "digest": "sha512", 00:15:54.818 "dhgroup": "ffdhe4096" 00:15:54.818 } 00:15:54.818 } 00:15:54.818 ]' 00:15:54.818 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.818 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:54.818 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.818 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:54.818 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.076 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.076 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.076 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.076 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:15:55.076 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:15:55.642 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.900 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:55.900 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.900 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.900 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
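Besides the SPDK-host path, each iteration also exercises the kernel initiator via nvme-cli, as in the nvme_connect/nvme disconnect entries above. A hedged sketch of that step follows; the full DHHC-1 secrets are abbreviated here, the real values appear verbatim in the trace.
# Connect the kernel host with the same DH-CHAP secrets the target was configured with.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 \
    --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
# Expect "disconnected 1 controller(s)" once the authenticated connection is torn down.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0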
00:15:55.900 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.900 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:55.900 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:55.900 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:15:55.900 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.900 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:55.900 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:55.900 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:55.900 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.900 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.900 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.900 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.900 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.900 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.900 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.900 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.163 00:15:56.163 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.163 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.163 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.420 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.420 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.420 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.420 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.420 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.420 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.420 { 00:15:56.420 "cntlid": 123, 00:15:56.420 "qid": 0, 00:15:56.420 "state": "enabled", 00:15:56.420 "thread": "nvmf_tgt_poll_group_000", 00:15:56.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:56.420 "listen_address": { 00:15:56.420 "trtype": "TCP", 00:15:56.420 "adrfam": "IPv4", 00:15:56.420 "traddr": "10.0.0.2", 00:15:56.420 "trsvcid": "4420" 00:15:56.420 }, 00:15:56.420 "peer_address": { 00:15:56.420 "trtype": "TCP", 00:15:56.420 "adrfam": "IPv4", 00:15:56.420 "traddr": "10.0.0.1", 00:15:56.420 "trsvcid": "59952" 00:15:56.420 }, 00:15:56.420 "auth": { 00:15:56.420 "state": "completed", 00:15:56.420 "digest": "sha512", 00:15:56.420 "dhgroup": "ffdhe4096" 00:15:56.420 } 00:15:56.420 } 00:15:56.420 ]' 00:15:56.420 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.420 05:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:56.420 05:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.678 05:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:56.678 05:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.678 05:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.678 05:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.678 05:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.936 05:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:15:56.936 05:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:15:57.502 05:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.502 05:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:57.502 05:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.502 05:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.502 05:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.502 05:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.502 05:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:57.502 05:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:57.502 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:15:57.502 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.502 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:57.502 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:57.502 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:57.502 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.502 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.502 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.502 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.502 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.502 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.502 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.502 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.759 00:15:58.017 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.017 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.017 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.017 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.017 05:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.017 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.017 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.017 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.017 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.017 { 00:15:58.017 "cntlid": 125, 00:15:58.017 "qid": 0, 00:15:58.017 "state": "enabled", 00:15:58.017 "thread": "nvmf_tgt_poll_group_000", 00:15:58.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:58.017 "listen_address": { 00:15:58.017 "trtype": "TCP", 00:15:58.017 "adrfam": "IPv4", 00:15:58.017 "traddr": "10.0.0.2", 00:15:58.017 "trsvcid": "4420" 00:15:58.017 }, 00:15:58.017 "peer_address": { 00:15:58.017 "trtype": "TCP", 00:15:58.017 "adrfam": "IPv4", 00:15:58.017 "traddr": "10.0.0.1", 00:15:58.017 "trsvcid": "59994" 00:15:58.017 }, 00:15:58.017 "auth": { 00:15:58.017 "state": "completed", 00:15:58.017 "digest": "sha512", 00:15:58.017 "dhgroup": "ffdhe4096" 00:15:58.017 } 00:15:58.017 } 00:15:58.017 ]' 00:15:58.017 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.017 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:58.017 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.275 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:58.275 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.275 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.275 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.275 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.534 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:15:58.534 05:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:15:59.099 05:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.099 05:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:59.099 05:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.099 05:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.099 05:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.099 05:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.099 05:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:59.099 05:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:59.099 05:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:15:59.099 05:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.099 05:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:59.099 05:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:59.099 05:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:59.099 05:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.099 05:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:59.099 05:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.099 05:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.099 05:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.099 05:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:59.099 05:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:59.099 05:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:59.356 00:15:59.614 05:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.614 05:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.614 05:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.614 05:10:36 
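Just before the key3 pass above, the test also drove the kernel host through the same credentials: nvme-cli connects with the generated DHHC-1 secrets (--dhchap-secret for the host key, --dhchap-ctrl-secret for the controller key when one is configured; key3 has none) and then disconnects. A sketch with placeholder secrets, using only the flags visible in the log:

    HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    # <host secret> / <ctrl secret> stand in for the real DHHC-1:xx:...: strings carried in the log.
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:$HOSTID" --hostid "$HOSTID" -l 0 \
        --dhchap-secret '<host secret>' --dhchap-ctrl-secret '<ctrl secret>'
    nvme disconnect -n "$SUBNQN"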
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.614 05:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.614 05:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.614 05:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.614 05:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.614 05:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.614 { 00:15:59.614 "cntlid": 127, 00:15:59.614 "qid": 0, 00:15:59.614 "state": "enabled", 00:15:59.614 "thread": "nvmf_tgt_poll_group_000", 00:15:59.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:59.614 "listen_address": { 00:15:59.614 "trtype": "TCP", 00:15:59.614 "adrfam": "IPv4", 00:15:59.614 "traddr": "10.0.0.2", 00:15:59.614 "trsvcid": "4420" 00:15:59.614 }, 00:15:59.614 "peer_address": { 00:15:59.614 "trtype": "TCP", 00:15:59.614 "adrfam": "IPv4", 00:15:59.614 "traddr": "10.0.0.1", 00:15:59.614 "trsvcid": "60032" 00:15:59.614 }, 00:15:59.614 "auth": { 00:15:59.614 "state": "completed", 00:15:59.614 "digest": "sha512", 00:15:59.614 "dhgroup": "ffdhe4096" 00:15:59.614 } 00:15:59.614 } 00:15:59.614 ]' 00:15:59.614 05:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.872 05:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:59.872 05:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.872 05:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:59.872 05:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.872 05:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.872 05:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.872 05:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.130 05:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:16:00.130 05:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:16:00.698 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.698 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:00.698 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.698 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.698 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.698 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:00.698 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.698 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:00.698 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:00.698 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:00.698 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.698 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:00.698 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:00.698 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:00.698 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.698 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.698 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.698 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.956 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.956 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.956 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.956 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.214 00:16:01.214 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.214 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.214 
05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.473 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.473 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.473 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.473 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.473 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.473 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.473 { 00:16:01.473 "cntlid": 129, 00:16:01.473 "qid": 0, 00:16:01.473 "state": "enabled", 00:16:01.473 "thread": "nvmf_tgt_poll_group_000", 00:16:01.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:01.473 "listen_address": { 00:16:01.473 "trtype": "TCP", 00:16:01.473 "adrfam": "IPv4", 00:16:01.473 "traddr": "10.0.0.2", 00:16:01.473 "trsvcid": "4420" 00:16:01.473 }, 00:16:01.473 "peer_address": { 00:16:01.473 "trtype": "TCP", 00:16:01.473 "adrfam": "IPv4", 00:16:01.473 "traddr": "10.0.0.1", 00:16:01.473 "trsvcid": "44086" 00:16:01.473 }, 00:16:01.473 "auth": { 00:16:01.473 "state": "completed", 00:16:01.473 "digest": "sha512", 00:16:01.473 "dhgroup": "ffdhe6144" 00:16:01.473 } 00:16:01.473 } 00:16:01.473 ]' 00:16:01.473 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.473 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:01.473 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.473 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:01.473 05:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.473 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.473 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.473 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.732 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:16:01.732 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret 
DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:16:02.298 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.299 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:02.299 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.299 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.299 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.299 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.299 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:02.299 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:02.571 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:02.571 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.571 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:02.571 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:02.571 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:02.571 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.571 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.571 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.571 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.571 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.571 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.571 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.571 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.829 00:16:02.829 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.829 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.829 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.087 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.087 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.087 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.087 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.087 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.087 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.087 { 00:16:03.087 "cntlid": 131, 00:16:03.087 "qid": 0, 00:16:03.087 "state": "enabled", 00:16:03.087 "thread": "nvmf_tgt_poll_group_000", 00:16:03.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:03.087 "listen_address": { 00:16:03.087 "trtype": "TCP", 00:16:03.087 "adrfam": "IPv4", 00:16:03.087 "traddr": "10.0.0.2", 00:16:03.087 "trsvcid": "4420" 00:16:03.087 }, 00:16:03.087 "peer_address": { 00:16:03.087 "trtype": "TCP", 00:16:03.087 "adrfam": "IPv4", 00:16:03.087 "traddr": "10.0.0.1", 00:16:03.087 "trsvcid": "44122" 00:16:03.087 }, 00:16:03.087 "auth": { 00:16:03.087 "state": "completed", 00:16:03.087 "digest": "sha512", 00:16:03.087 "dhgroup": "ffdhe6144" 00:16:03.087 } 00:16:03.087 } 00:16:03.087 ]' 00:16:03.087 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.087 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:03.087 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.087 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:03.087 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.087 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.087 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.087 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.345 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:16:03.345 05:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:16:03.911 05:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.911 05:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:03.911 05:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.911 05:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.911 05:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.911 05:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.911 05:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:03.911 05:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:04.173 05:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:04.173 05:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.173 05:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:04.173 05:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:04.173 05:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:04.173 05:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.173 05:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.173 05:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.173 05:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.173 05:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.173 05:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.173 05:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.173 05:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.511 00:16:04.511 05:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.511 05:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.511 05:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.809 05:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.810 05:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.810 05:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.810 05:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.810 05:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.810 05:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.810 { 00:16:04.810 "cntlid": 133, 00:16:04.810 "qid": 0, 00:16:04.810 "state": "enabled", 00:16:04.810 "thread": "nvmf_tgt_poll_group_000", 00:16:04.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:04.810 "listen_address": { 00:16:04.810 "trtype": "TCP", 00:16:04.810 "adrfam": "IPv4", 00:16:04.810 "traddr": "10.0.0.2", 00:16:04.810 "trsvcid": "4420" 00:16:04.810 }, 00:16:04.810 "peer_address": { 00:16:04.810 "trtype": "TCP", 00:16:04.810 "adrfam": "IPv4", 00:16:04.810 "traddr": "10.0.0.1", 00:16:04.810 "trsvcid": "44154" 00:16:04.810 }, 00:16:04.810 "auth": { 00:16:04.810 "state": "completed", 00:16:04.810 "digest": "sha512", 00:16:04.810 "dhgroup": "ffdhe6144" 00:16:04.810 } 00:16:04.810 } 00:16:04.810 ]' 00:16:04.810 05:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.810 05:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:04.810 05:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.810 05:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:04.810 05:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.090 05:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.090 05:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.090 05:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.090 05:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret 
DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:16:05.090 05:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:16:05.658 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.658 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:05.658 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.658 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.658 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.658 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.658 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:05.658 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:05.917 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:05.918 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.918 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:05.918 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:05.918 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:05.918 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.918 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:05.918 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.918 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.918 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.918 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:05.918 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:16:05.918 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:06.177 00:16:06.177 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.177 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.177 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.436 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.436 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.436 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.436 05:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.436 05:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.437 05:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.437 { 00:16:06.437 "cntlid": 135, 00:16:06.437 "qid": 0, 00:16:06.437 "state": "enabled", 00:16:06.437 "thread": "nvmf_tgt_poll_group_000", 00:16:06.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:06.437 "listen_address": { 00:16:06.437 "trtype": "TCP", 00:16:06.437 "adrfam": "IPv4", 00:16:06.437 "traddr": "10.0.0.2", 00:16:06.437 "trsvcid": "4420" 00:16:06.437 }, 00:16:06.437 "peer_address": { 00:16:06.437 "trtype": "TCP", 00:16:06.437 "adrfam": "IPv4", 00:16:06.437 "traddr": "10.0.0.1", 00:16:06.437 "trsvcid": "44162" 00:16:06.437 }, 00:16:06.437 "auth": { 00:16:06.437 "state": "completed", 00:16:06.437 "digest": "sha512", 00:16:06.437 "dhgroup": "ffdhe6144" 00:16:06.437 } 00:16:06.437 } 00:16:06.437 ]' 00:16:06.437 05:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.437 05:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:06.437 05:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.696 05:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:06.696 05:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.696 05:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.696 05:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.696 05:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.696 05:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:16:06.696 05:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:16:07.267 05:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.267 05:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:07.267 05:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.267 05:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.526 05:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.526 05:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:07.526 05:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.526 05:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:07.526 05:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:07.526 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:07.526 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.526 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:07.526 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:07.526 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:07.526 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.526 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.526 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.526 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.526 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.526 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.526 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.526 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.096 00:16:08.096 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.096 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.096 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.355 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.355 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.355 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.355 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.355 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.355 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.355 { 00:16:08.355 "cntlid": 137, 00:16:08.355 "qid": 0, 00:16:08.355 "state": "enabled", 00:16:08.355 "thread": "nvmf_tgt_poll_group_000", 00:16:08.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:08.355 "listen_address": { 00:16:08.355 "trtype": "TCP", 00:16:08.355 "adrfam": "IPv4", 00:16:08.355 "traddr": "10.0.0.2", 00:16:08.355 "trsvcid": "4420" 00:16:08.355 }, 00:16:08.355 "peer_address": { 00:16:08.355 "trtype": "TCP", 00:16:08.355 "adrfam": "IPv4", 00:16:08.355 "traddr": "10.0.0.1", 00:16:08.355 "trsvcid": "44194" 00:16:08.355 }, 00:16:08.355 "auth": { 00:16:08.355 "state": "completed", 00:16:08.355 "digest": "sha512", 00:16:08.355 "dhgroup": "ffdhe8192" 00:16:08.355 } 00:16:08.355 } 00:16:08.355 ]' 00:16:08.355 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.356 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:08.356 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.356 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:08.356 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.356 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.356 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.356 05:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.614 05:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:16:08.614 05:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:16:09.181 05:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.181 05:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:09.181 05:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.181 05:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.181 05:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.181 05:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.181 05:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:09.181 05:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:09.439 05:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:09.439 05:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.439 05:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:09.439 05:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:09.439 05:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:09.439 05:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.439 05:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.439 05:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.439 05:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.439 05:10:45 
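Each attach is followed by the same verification: the host reports the controller it created, and the target's qpair listing shows which digest and DH group were negotiated and that authentication reached the completed state. A sketch of those checks for the ffdhe8192 passes in this part of the log, assuming the target answers on rpc.py's default socket:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # Host: the attached controller should come back as nvme0.
    [[ $("$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Target: the qpair's auth block should reflect the negotiated parameters.
    qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]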
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.439 05:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.439 05:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.439 05:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.005 00:16:10.005 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.005 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.005 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.005 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.005 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.005 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.005 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.263 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.263 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.263 { 00:16:10.263 "cntlid": 139, 00:16:10.263 "qid": 0, 00:16:10.263 "state": "enabled", 00:16:10.263 "thread": "nvmf_tgt_poll_group_000", 00:16:10.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:10.263 "listen_address": { 00:16:10.263 "trtype": "TCP", 00:16:10.263 "adrfam": "IPv4", 00:16:10.263 "traddr": "10.0.0.2", 00:16:10.263 "trsvcid": "4420" 00:16:10.263 }, 00:16:10.263 "peer_address": { 00:16:10.263 "trtype": "TCP", 00:16:10.263 "adrfam": "IPv4", 00:16:10.263 "traddr": "10.0.0.1", 00:16:10.263 "trsvcid": "44214" 00:16:10.263 }, 00:16:10.263 "auth": { 00:16:10.263 "state": "completed", 00:16:10.263 "digest": "sha512", 00:16:10.263 "dhgroup": "ffdhe8192" 00:16:10.263 } 00:16:10.263 } 00:16:10.263 ]' 00:16:10.263 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.263 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:10.263 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.263 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:10.263 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.263 05:10:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.263 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.263 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.520 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:16:10.520 05:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: --dhchap-ctrl-secret DHHC-1:02:ODRhZDUyMDU1Y2IxNjc1MjUzZDg3ZTg5MDg3ZGVhYjY1ZTYzOTQ3YmNlZjdjZDIyJLQa+w==: 00:16:11.085 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.085 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.085 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.085 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.085 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.085 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.085 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:11.085 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:11.343 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:11.343 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.343 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:11.343 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:11.343 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:11.344 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.344 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.344 05:10:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.344 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.344 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.344 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.344 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.344 05:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.602 00:16:11.860 05:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.860 05:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.860 05:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.860 05:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.860 05:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.860 05:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.860 05:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.860 05:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.860 05:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.860 { 00:16:11.860 "cntlid": 141, 00:16:11.860 "qid": 0, 00:16:11.860 "state": "enabled", 00:16:11.860 "thread": "nvmf_tgt_poll_group_000", 00:16:11.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:11.860 "listen_address": { 00:16:11.860 "trtype": "TCP", 00:16:11.860 "adrfam": "IPv4", 00:16:11.860 "traddr": "10.0.0.2", 00:16:11.860 "trsvcid": "4420" 00:16:11.860 }, 00:16:11.860 "peer_address": { 00:16:11.860 "trtype": "TCP", 00:16:11.860 "adrfam": "IPv4", 00:16:11.860 "traddr": "10.0.0.1", 00:16:11.860 "trsvcid": "48016" 00:16:11.860 }, 00:16:11.860 "auth": { 00:16:11.860 "state": "completed", 00:16:11.860 "digest": "sha512", 00:16:11.860 "dhgroup": "ffdhe8192" 00:16:11.860 } 00:16:11.860 } 00:16:11.860 ]' 00:16:11.860 05:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.118 05:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:12.118 05:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.118 05:10:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:12.118 05:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.118 05:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.118 05:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.118 05:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.376 05:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:16:12.376 05:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:01:MjAzNDk2ZjY2NjA1YjgxMDg2N2FiMjQyY2FjMGZhYTggCpqd: 00:16:12.943 05:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.943 05:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:12.943 05:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.943 05:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.943 05:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.943 05:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.943 05:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:12.943 05:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:13.203 05:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:16:13.203 05:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.203 05:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:13.203 05:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:13.203 05:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:13.203 05:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.203 05:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:13.203 05:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.204 05:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.204 05:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.204 05:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:13.204 05:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.204 05:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.463 00:16:13.722 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.722 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.722 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.722 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.722 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.722 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.722 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.722 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.722 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.722 { 00:16:13.722 "cntlid": 143, 00:16:13.722 "qid": 0, 00:16:13.722 "state": "enabled", 00:16:13.722 "thread": "nvmf_tgt_poll_group_000", 00:16:13.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:13.722 "listen_address": { 00:16:13.722 "trtype": "TCP", 00:16:13.722 "adrfam": "IPv4", 00:16:13.722 "traddr": "10.0.0.2", 00:16:13.722 "trsvcid": "4420" 00:16:13.722 }, 00:16:13.722 "peer_address": { 00:16:13.722 "trtype": "TCP", 00:16:13.722 "adrfam": "IPv4", 00:16:13.722 "traddr": "10.0.0.1", 00:16:13.722 "trsvcid": "48058" 00:16:13.722 }, 00:16:13.722 "auth": { 00:16:13.722 "state": "completed", 00:16:13.722 "digest": "sha512", 00:16:13.722 "dhgroup": "ffdhe8192" 00:16:13.722 } 00:16:13.722 } 00:16:13.722 ]' 00:16:13.722 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.980 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:13.980 
05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.981 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:13.981 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.981 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.981 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.981 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.239 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:16:14.239 05:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:16:14.808 05:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.808 05:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:14.808 05:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.808 05:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.808 05:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.808 05:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:14.808 05:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:16:14.808 05:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:14.808 05:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:14.808 05:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:14.808 05:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:14.808 05:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:16:14.808 05:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.808 05:10:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:14.808 05:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:14.808 05:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:14.808 05:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.808 05:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.808 05:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.808 05:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.808 05:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.808 05:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.808 05:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.808 05:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.378 00:16:15.378 05:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.378 05:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.378 05:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.660 05:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.660 05:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.660 05:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.660 05:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.660 05:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.660 05:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.660 { 00:16:15.660 "cntlid": 145, 00:16:15.660 "qid": 0, 00:16:15.660 "state": "enabled", 00:16:15.660 "thread": "nvmf_tgt_poll_group_000", 00:16:15.660 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:15.660 "listen_address": { 00:16:15.660 "trtype": "TCP", 00:16:15.660 "adrfam": "IPv4", 00:16:15.660 "traddr": "10.0.0.2", 00:16:15.660 "trsvcid": "4420" 00:16:15.660 }, 00:16:15.660 "peer_address": { 00:16:15.660 
"trtype": "TCP", 00:16:15.660 "adrfam": "IPv4", 00:16:15.660 "traddr": "10.0.0.1", 00:16:15.660 "trsvcid": "48096" 00:16:15.660 }, 00:16:15.660 "auth": { 00:16:15.660 "state": "completed", 00:16:15.660 "digest": "sha512", 00:16:15.660 "dhgroup": "ffdhe8192" 00:16:15.660 } 00:16:15.660 } 00:16:15.660 ]' 00:16:15.660 05:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.660 05:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.660 05:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.660 05:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:15.660 05:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.660 05:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.660 05:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.660 05:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.920 05:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:16:15.920 05:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MmI5MGIxZWViNzNlYmMwZGEwZDc2ZDY5ZjcwMDQ1ZTQ5ZjRiMTQ3YWFhMzg3Yzc5thgI9w==: --dhchap-ctrl-secret DHHC-1:03:MTAwNTdhMjU4NjU2OWJmOTVjMzllMTRmMjM4NjUzNTQyOGI0NTljMmEwZGIwMTRmOGU4NjMwMDYwODcyMDE4Y34F9NA=: 00:16:16.488 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.488 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:16.488 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.488 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.488 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.488 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:16.488 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.488 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.488 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.488 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:16:16.488 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:16.488 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:16:16.488 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:16.488 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:16.488 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:16.488 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:16.488 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:16:16.488 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:16.488 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:17.057 request: 00:16:17.057 { 00:16:17.057 "name": "nvme0", 00:16:17.057 "trtype": "tcp", 00:16:17.057 "traddr": "10.0.0.2", 00:16:17.057 "adrfam": "ipv4", 00:16:17.057 "trsvcid": "4420", 00:16:17.057 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:17.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:17.057 "prchk_reftag": false, 00:16:17.057 "prchk_guard": false, 00:16:17.057 "hdgst": false, 00:16:17.057 "ddgst": false, 00:16:17.057 "dhchap_key": "key2", 00:16:17.057 "allow_unrecognized_csi": false, 00:16:17.057 "method": "bdev_nvme_attach_controller", 00:16:17.057 "req_id": 1 00:16:17.057 } 00:16:17.057 Got JSON-RPC error response 00:16:17.057 response: 00:16:17.057 { 00:16:17.057 "code": -5, 00:16:17.057 "message": "Input/output error" 00:16:17.057 } 00:16:17.057 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:17.057 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:17.057 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:17.057 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:17.057 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:17.057 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.057 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.057 05:10:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.057 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.057 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.057 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.057 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.057 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:17.057 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:17.057 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:17.057 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:17.057 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.057 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:17.057 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.057 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:17.057 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:17.057 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:17.628 request: 00:16:17.628 { 00:16:17.628 "name": "nvme0", 00:16:17.628 "trtype": "tcp", 00:16:17.628 "traddr": "10.0.0.2", 00:16:17.628 "adrfam": "ipv4", 00:16:17.628 "trsvcid": "4420", 00:16:17.628 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:17.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:17.628 "prchk_reftag": false, 00:16:17.628 "prchk_guard": false, 00:16:17.628 "hdgst": false, 00:16:17.628 "ddgst": false, 00:16:17.628 "dhchap_key": "key1", 00:16:17.628 "dhchap_ctrlr_key": "ckey2", 00:16:17.628 "allow_unrecognized_csi": false, 00:16:17.628 "method": "bdev_nvme_attach_controller", 00:16:17.628 "req_id": 1 00:16:17.628 } 00:16:17.628 Got JSON-RPC error response 00:16:17.628 response: 00:16:17.628 { 00:16:17.628 "code": -5, 00:16:17.628 "message": "Input/output error" 00:16:17.628 } 00:16:17.628 05:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:17.628 05:10:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:17.628 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:17.628 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:17.628 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:17.628 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.628 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.628 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.628 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:17.628 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.628 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.628 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.628 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.628 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:17.628 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.628 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:17.628 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.628 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:17.628 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.628 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.628 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.628 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.888 request: 00:16:17.888 { 00:16:17.888 "name": "nvme0", 00:16:17.888 "trtype": "tcp", 00:16:17.888 "traddr": "10.0.0.2", 00:16:17.888 "adrfam": "ipv4", 00:16:17.888 "trsvcid": "4420", 00:16:17.888 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:17.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:17.888 "prchk_reftag": false, 00:16:17.888 "prchk_guard": false, 00:16:17.888 "hdgst": false, 00:16:17.888 "ddgst": false, 00:16:17.888 "dhchap_key": "key1", 00:16:17.888 "dhchap_ctrlr_key": "ckey1", 00:16:17.888 "allow_unrecognized_csi": false, 00:16:17.888 "method": "bdev_nvme_attach_controller", 00:16:17.888 "req_id": 1 00:16:17.888 } 00:16:17.888 Got JSON-RPC error response 00:16:17.888 response: 00:16:17.888 { 00:16:17.888 "code": -5, 00:16:17.888 "message": "Input/output error" 00:16:17.888 } 00:16:17.888 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:17.888 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:17.888 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:17.888 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:17.888 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:17.888 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.888 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.888 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.888 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3559149 00:16:17.888 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3559149 ']' 00:16:17.888 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3559149 00:16:17.888 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:17.888 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:17.888 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3559149 00:16:18.147 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:18.147 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:18.147 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3559149' 00:16:18.148 killing process with pid 3559149 00:16:18.148 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3559149 00:16:18.148 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3559149 00:16:18.148 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:18.148 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:18.148 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:18.148 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:18.148 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3581391 00:16:18.148 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:18.148 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3581391 00:16:18.148 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3581391 ']' 00:16:18.148 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.148 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:18.148 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.148 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:18.148 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.407 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:18.407 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:18.407 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:18.407 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:18.407 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.407 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.407 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:18.407 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3581391 00:16:18.407 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3581391 ']' 00:16:18.407 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.407 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:18.407 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:18.407 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:18.407 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.664 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:18.664 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:18.664 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:16:18.664 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.664 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.664 null0 00:16:18.664 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.664 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:18.664 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.peW 00:16:18.664 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.664 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.922 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.922 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.H9N ]] 00:16:18.922 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.H9N 00:16:18.922 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.922 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.922 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.922 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:18.922 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.CIy 00:16:18.922 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.922 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.922 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.922 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.fM0 ]] 00:16:18.922 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fM0 00:16:18.922 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.922 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.922 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.922 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:18.922 05:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.DtF 00:16:18.922 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.922 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.922 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.922 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.qJ6 ]] 00:16:18.922 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qJ6 00:16:18.922 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.922 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.922 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.922 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:18.923 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Udt 00:16:18.923 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.923 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.923 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.923 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:16:18.923 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:16:18.923 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.923 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:18.923 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:18.923 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:18.923 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.923 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:18.923 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.923 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.923 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.923 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:18.923 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:16:18.923 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:19.488 nvme0n1 00:16:19.746 05:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.746 05:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.747 05:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.747 05:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.747 05:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.747 05:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.747 05:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.747 05:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.747 05:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.747 { 00:16:19.747 "cntlid": 1, 00:16:19.747 "qid": 0, 00:16:19.747 "state": "enabled", 00:16:19.747 "thread": "nvmf_tgt_poll_group_000", 00:16:19.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:19.747 "listen_address": { 00:16:19.747 "trtype": "TCP", 00:16:19.747 "adrfam": "IPv4", 00:16:19.747 "traddr": "10.0.0.2", 00:16:19.747 "trsvcid": "4420" 00:16:19.747 }, 00:16:19.747 "peer_address": { 00:16:19.747 "trtype": "TCP", 00:16:19.747 "adrfam": "IPv4", 00:16:19.747 "traddr": "10.0.0.1", 00:16:19.747 "trsvcid": "48144" 00:16:19.747 }, 00:16:19.747 "auth": { 00:16:19.747 "state": "completed", 00:16:19.747 "digest": "sha512", 00:16:19.747 "dhgroup": "ffdhe8192" 00:16:19.747 } 00:16:19.747 } 00:16:19.747 ]' 00:16:19.747 05:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.747 05:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:19.747 05:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.004 05:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:20.004 05:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.004 05:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.004 05:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.004 05:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.262 05:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:16:20.262 05:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:16:20.828 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.829 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:20.829 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.829 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.829 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.829 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:20.829 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.829 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.829 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.829 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:20.829 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:21.087 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:21.087 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:21.087 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:21.087 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:21.087 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:21.087 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:21.087 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:21.087 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:21.087 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.087 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.087 request: 00:16:21.087 { 00:16:21.087 "name": "nvme0", 00:16:21.087 "trtype": "tcp", 00:16:21.087 "traddr": "10.0.0.2", 00:16:21.087 "adrfam": "ipv4", 00:16:21.087 "trsvcid": "4420", 00:16:21.087 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:21.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:21.087 "prchk_reftag": false, 00:16:21.087 "prchk_guard": false, 00:16:21.087 "hdgst": false, 00:16:21.087 "ddgst": false, 00:16:21.087 "dhchap_key": "key3", 00:16:21.087 "allow_unrecognized_csi": false, 00:16:21.087 "method": "bdev_nvme_attach_controller", 00:16:21.087 "req_id": 1 00:16:21.087 } 00:16:21.087 Got JSON-RPC error response 00:16:21.087 response: 00:16:21.087 { 00:16:21.087 "code": -5, 00:16:21.087 "message": "Input/output error" 00:16:21.087 } 00:16:21.087 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:21.087 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:21.087 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:21.087 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:21.087 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:16:21.087 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:16:21.087 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:21.087 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:21.345 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:21.345 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:21.345 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:21.345 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:21.345 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:21.345 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:21.345 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:21.345 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:21.345 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.345 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.603 request: 00:16:21.603 { 00:16:21.603 "name": "nvme0", 00:16:21.603 "trtype": "tcp", 00:16:21.603 "traddr": "10.0.0.2", 00:16:21.603 "adrfam": "ipv4", 00:16:21.603 "trsvcid": "4420", 00:16:21.603 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:21.603 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:21.603 "prchk_reftag": false, 00:16:21.603 "prchk_guard": false, 00:16:21.603 "hdgst": false, 00:16:21.603 "ddgst": false, 00:16:21.603 "dhchap_key": "key3", 00:16:21.603 "allow_unrecognized_csi": false, 00:16:21.603 "method": "bdev_nvme_attach_controller", 00:16:21.603 "req_id": 1 00:16:21.603 } 00:16:21.603 Got JSON-RPC error response 00:16:21.603 response: 00:16:21.603 { 00:16:21.603 "code": -5, 00:16:21.603 "message": "Input/output error" 00:16:21.603 } 00:16:21.603 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:21.603 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:21.603 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:21.603 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:21.603 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:21.603 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:16:21.603 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:21.603 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:21.603 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:21.603 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:21.861 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:21.861 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.861 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.861 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.861 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:21.861 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.861 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.861 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.861 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:21.861 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:21.861 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:21.861 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:21.861 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:21.861 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:21.861 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:21.861 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:21.861 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:21.861 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:22.119 request: 00:16:22.119 { 00:16:22.119 "name": "nvme0", 00:16:22.119 "trtype": "tcp", 00:16:22.119 "traddr": "10.0.0.2", 00:16:22.119 "adrfam": "ipv4", 00:16:22.119 "trsvcid": "4420", 00:16:22.119 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:22.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:22.119 "prchk_reftag": false, 00:16:22.119 "prchk_guard": false, 00:16:22.119 "hdgst": false, 00:16:22.119 "ddgst": false, 00:16:22.119 "dhchap_key": "key0", 00:16:22.119 "dhchap_ctrlr_key": "key1", 00:16:22.119 "allow_unrecognized_csi": false, 00:16:22.119 "method": "bdev_nvme_attach_controller", 00:16:22.119 "req_id": 1 00:16:22.119 } 00:16:22.119 Got JSON-RPC error response 00:16:22.119 response: 00:16:22.119 { 00:16:22.119 "code": -5, 00:16:22.119 "message": "Input/output error" 00:16:22.119 } 00:16:22.119 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:22.119 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:22.119 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:22.119 05:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:22.119 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:16:22.119 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:22.119 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:22.377 nvme0n1 00:16:22.377 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:16:22.377 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.377 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:16:22.636 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.636 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.636 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.895 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:22.895 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.895 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.895 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.895 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:22.895 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:22.895 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:23.462 nvme0n1 00:16:23.462 05:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:16:23.462 05:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:16:23.463 05:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.721 05:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.721 05:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:23.721 05:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.721 05:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.721 05:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.721 05:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:16:23.721 05:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:16:23.721 05:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.980 05:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.981 05:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:16:23.981 05:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: --dhchap-ctrl-secret DHHC-1:03:ODBlMGVjZmE5MzY0Yzc3Y2E5MzZhMWE2NTU5YzliYmFmYzljMzYwYTM5ODdjYTc4MjZlNDEzYTc2MDQyZjk4MJXpkYs=: 00:16:24.550 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:16:24.550 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:16:24.550 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:16:24.550 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:16:24.550 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:16:24.550 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:16:24.550 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:16:24.550 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.550 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.809 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:16:24.809 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:24.809 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:16:24.809 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:24.809 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:24.809 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:24.809 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:24.809 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:24.809 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:24.809 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:25.069 request: 00:16:25.069 { 00:16:25.069 "name": "nvme0", 00:16:25.069 "trtype": "tcp", 00:16:25.069 "traddr": "10.0.0.2", 00:16:25.069 "adrfam": "ipv4", 00:16:25.069 "trsvcid": "4420", 00:16:25.069 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:25.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:25.069 "prchk_reftag": false, 00:16:25.069 "prchk_guard": false, 00:16:25.069 "hdgst": false, 00:16:25.069 "ddgst": false, 00:16:25.069 "dhchap_key": "key1", 00:16:25.069 "allow_unrecognized_csi": false, 00:16:25.069 "method": "bdev_nvme_attach_controller", 00:16:25.069 "req_id": 1 00:16:25.069 } 00:16:25.069 Got JSON-RPC error response 00:16:25.069 response: 00:16:25.069 { 00:16:25.069 "code": -5, 00:16:25.069 "message": "Input/output error" 00:16:25.069 } 00:16:25.069 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:25.069 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:25.069 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:25.069 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:25.069 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:25.329 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:25.329 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:25.895 nvme0n1 00:16:25.895 05:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:16:25.895 05:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:16:25.895 05:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.154 05:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.154 05:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.154 05:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.414 05:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.414 05:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.414 05:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.414 05:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.414 05:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:16:26.414 05:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:26.414 05:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:26.674 nvme0n1 00:16:26.674 05:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:16:26.674 05:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:16:26.674 05:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.674 05:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.674 05:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.674 05:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.934 05:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:26.934 05:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.934 05:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.934 05:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.934 05:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: '' 2s 00:16:26.934 05:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:26.934 05:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:26.934 05:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: 00:16:26.934 05:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:16:26.934 05:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:26.934 05:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:26.934 05:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: ]] 00:16:26.934 05:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZmNmMDFhMjQ4YTkxMGI3ZDViOTA4MDM1ZDk4NzlmMGT5ogVX: 00:16:26.934 05:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:16:26.934 05:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:26.934 05:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:29.470 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:16:29.470 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:16:29.470 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:29.470 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:29.470 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:29.470 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:29.470 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:16:29.470 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:16:29.470 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.470 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.470 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.470 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: 2s 00:16:29.470 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:29.470 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:29.470 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:16:29.470 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: 00:16:29.470 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:29.470 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:29.470 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:16:29.470 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: ]] 00:16:29.470 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Y2U5ZWIzNzEwZTUwMWJjOWVhMzc5NTU2MjUwOGMzNjc2ZGQzZjUwNTVkMjQwOWE0/Q7k/A==: 00:16:29.470 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:29.470 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:31.368 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:16:31.368 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:16:31.368 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:31.368 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:31.368 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:31.368 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:31.368 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:16:31.368 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.369 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:31.369 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.369 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.369 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.369 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:31.369 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:31.369 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:31.933 nvme0n1 00:16:31.933 05:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:31.933 05:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.933 05:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.933 05:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.933 05:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:31.933 05:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:32.499 05:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:16:32.499 05:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:16:32.499 05:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.499 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.499 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.499 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.499 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.499 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.499 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:16:32.499 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:16:32.757 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:16:32.757 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:16:32.757 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.016 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.016 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:33.016 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.016 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.016 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.016 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:33.016 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:33.016 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:33.016 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:16:33.016 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:33.016 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:16:33.016 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:33.016 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:33.016 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:33.582 request: 00:16:33.582 { 00:16:33.582 "name": "nvme0", 00:16:33.582 "dhchap_key": "key1", 00:16:33.582 "dhchap_ctrlr_key": "key3", 00:16:33.582 "method": "bdev_nvme_set_keys", 00:16:33.582 "req_id": 1 00:16:33.582 } 00:16:33.582 Got JSON-RPC error response 00:16:33.582 response: 00:16:33.582 { 00:16:33.582 "code": -13, 00:16:33.582 "message": "Permission denied" 00:16:33.582 } 00:16:33.582 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:33.582 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:33.582 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:33.582 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:33.582 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:16:33.582 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:33.582 05:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.582 05:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:16:33.582 05:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:16:34.961 05:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:16:34.961 05:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:34.961 05:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.961 05:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:16:34.961 05:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:34.961 05:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.961 05:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.961 05:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.962 05:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:34.962 05:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:34.962 05:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:35.531 nvme0n1 00:16:35.531 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:35.531 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.531 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.531 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.531 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:35.531 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:35.531 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:35.531 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
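[editor's note] The entries in this stretch of the trace exercise DH-HMAC-CHAP re-keying on a live connection: the target's accepted keys for the host are updated with nvmf_subsystem_set_keys, the host then re-authenticates the existing controller with bdev_nvme_set_keys, and a deliberately mismatched key pair is expected to be rejected with JSON-RPC error -13 ("Permission denied"), which the NOT wrapper asserts. A minimal sketch of that flow, restricted to the RPCs this run actually drives (the NQNs, the /var/tmp/host.sock socket, and the pre-loaded key names key0..key3 are taken from this log; rpc.py paths are shortened):

# Target side (default RPC socket): allow this host to authenticate with key2/key3
scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# Host side (the test's bdev_nvme instance on /var/tmp/host.sock): re-key the live controller
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# A pair the target does not accept (e.g. key2/key0 in this run) fails with code -13 "Permission denied"
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key0 || echo "rejected as expected"

[end editor's note]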
00:16:35.531 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.531 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:16:35.531 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.531 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:35.531 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:36.101 request: 00:16:36.101 { 00:16:36.101 "name": "nvme0", 00:16:36.101 "dhchap_key": "key2", 00:16:36.101 "dhchap_ctrlr_key": "key0", 00:16:36.101 "method": "bdev_nvme_set_keys", 00:16:36.101 "req_id": 1 00:16:36.101 } 00:16:36.101 Got JSON-RPC error response 00:16:36.101 response: 00:16:36.101 { 00:16:36.101 "code": -13, 00:16:36.101 "message": "Permission denied" 00:16:36.101 } 00:16:36.101 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:36.101 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:36.101 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:36.101 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:36.101 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:36.101 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:16:36.101 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.360 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:16:36.360 05:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:16:37.298 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:37.298 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:16:37.298 05:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.558 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:16:37.558 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:16:37.558 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:16:37.558 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3559169 00:16:37.558 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3559169 ']' 00:16:37.558 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3559169 00:16:37.558 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:37.558 
05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:37.558 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3559169 00:16:37.558 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:37.558 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:37.558 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3559169' 00:16:37.558 killing process with pid 3559169 00:16:37.558 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3559169 00:16:37.558 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3559169 00:16:37.818 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:37.818 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:37.818 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:16:37.818 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:37.818 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:16:37.818 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:37.818 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:37.818 rmmod nvme_tcp 00:16:37.818 rmmod nvme_fabrics 00:16:37.818 rmmod nvme_keyring 00:16:38.078 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:38.078 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:16:38.078 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:16:38.078 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3581391 ']' 00:16:38.078 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3581391 00:16:38.078 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3581391 ']' 00:16:38.078 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3581391 00:16:38.078 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:38.078 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:38.078 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3581391 00:16:38.078 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:38.078 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:38.078 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3581391' 00:16:38.078 killing process with pid 3581391 00:16:38.078 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3581391 00:16:38.078 05:11:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3581391 00:16:38.338 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:38.338 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:38.338 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:38.338 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:16:38.338 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:16:38.338 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:38.338 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:16:38.338 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:38.338 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:38.338 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.338 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:38.338 05:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.255 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:40.255 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.peW /tmp/spdk.key-sha256.CIy /tmp/spdk.key-sha384.DtF /tmp/spdk.key-sha512.Udt /tmp/spdk.key-sha512.H9N /tmp/spdk.key-sha384.fM0 /tmp/spdk.key-sha256.qJ6 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:16:40.255 00:16:40.255 real 2m32.520s 00:16:40.255 user 5m51.944s 00:16:40.255 sys 0m23.849s 00:16:40.255 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.255 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.255 ************************************ 00:16:40.255 END TEST nvmf_auth_target 00:16:40.255 ************************************ 00:16:40.255 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:16:40.255 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:40.255 05:11:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:40.255 05:11:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.255 05:11:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:40.255 ************************************ 00:16:40.255 START TEST nvmf_bdevio_no_huge 00:16:40.255 ************************************ 00:16:40.255 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:40.514 * Looking for test storage... 
00:16:40.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:40.514 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:40.514 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:16:40.514 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:40.514 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:40.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.514 --rc genhtml_branch_coverage=1 00:16:40.515 --rc genhtml_function_coverage=1 00:16:40.515 --rc genhtml_legend=1 00:16:40.515 --rc geninfo_all_blocks=1 00:16:40.515 --rc geninfo_unexecuted_blocks=1 00:16:40.515 00:16:40.515 ' 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:40.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.515 --rc genhtml_branch_coverage=1 00:16:40.515 --rc genhtml_function_coverage=1 00:16:40.515 --rc genhtml_legend=1 00:16:40.515 --rc geninfo_all_blocks=1 00:16:40.515 --rc geninfo_unexecuted_blocks=1 00:16:40.515 00:16:40.515 ' 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:40.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.515 --rc genhtml_branch_coverage=1 00:16:40.515 --rc genhtml_function_coverage=1 00:16:40.515 --rc genhtml_legend=1 00:16:40.515 --rc geninfo_all_blocks=1 00:16:40.515 --rc geninfo_unexecuted_blocks=1 00:16:40.515 00:16:40.515 ' 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:40.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.515 --rc genhtml_branch_coverage=1 00:16:40.515 --rc genhtml_function_coverage=1 00:16:40.515 --rc genhtml_legend=1 00:16:40.515 --rc geninfo_all_blocks=1 00:16:40.515 --rc geninfo_unexecuted_blocks=1 00:16:40.515 00:16:40.515 ' 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:40.515 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:16:40.515 05:11:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:16:45.790 
05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:45.790 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:45.790 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:45.790 Found net devices under 0000:86:00.0: cvl_0_0 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:45.790 Found net devices under 0000:86:00.1: cvl_0_1 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:45.790 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:45.791 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:46.050 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:46.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:46.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:16:46.050 00:16:46.050 --- 10.0.0.2 ping statistics --- 00:16:46.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.050 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:16:46.050 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:46.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:46.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:16:46.050 00:16:46.050 --- 10.0.0.1 ping statistics --- 00:16:46.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.050 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:16:46.050 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:46.050 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:16:46.050 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:46.050 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:46.050 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:46.050 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:46.050 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:46.050 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:46.050 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:46.050 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:46.050 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:46.050 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:46.050 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:46.050 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3588268 00:16:46.050 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3588268 00:16:46.050 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:46.050 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 3588268 ']' 00:16:46.050 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.050 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:16:46.050 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.050 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:46.050 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:46.050 [2024-12-09 05:11:22.545700] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:16:46.050 [2024-12-09 05:11:22.545750] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:46.050 [2024-12-09 05:11:22.620547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:46.050 [2024-12-09 05:11:22.668035] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.050 [2024-12-09 05:11:22.668068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.050 [2024-12-09 05:11:22.668075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.050 [2024-12-09 05:11:22.668081] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.050 [2024-12-09 05:11:22.668086] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
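For reference, the netns-based topology that nvmf_tcp_init assembles in the trace above reduces to the following minimal sketch. It only restates commands recorded in this run (the cvl_0_0/cvl_0_1 interface names and the jenkins workspace path are specific to this machine) and is an illustration, not part of the test scripts:

   # move the target-side port into its own network namespace
   ip netns add cvl_0_0_ns_spdk
   ip link set cvl_0_0 netns cvl_0_0_ns_spdk
   # initiator keeps 10.0.0.1 in the default namespace, target gets 10.0.0.2
   ip addr add 10.0.0.1/24 dev cvl_0_1
   ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
   ip link set cvl_0_1 up
   ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
   ip netns exec cvl_0_0_ns_spdk ip link set lo up
   # open the NVMe/TCP port on the initiator-facing interface
   iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
   # verify reachability in both directions
   ping -c 1 10.0.0.2
   ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
   # launch the target inside the namespace without hugepages (1024 MB, core mask 0x78 = cores 3-6)
   ip netns exec cvl_0_0_ns_spdk \
       /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
       -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78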
00:16:46.050 [2024-12-09 05:11:22.669265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:16:46.050 [2024-12-09 05:11:22.669374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:16:46.050 [2024-12-09 05:11:22.669481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:46.050 [2024-12-09 05:11:22.669482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:46.309 [2024-12-09 05:11:22.814266] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:46.309 Malloc0 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:46.309 [2024-12-09 05:11:22.850517] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:46.309 { 00:16:46.309 "params": { 00:16:46.309 "name": "Nvme$subsystem", 00:16:46.309 "trtype": "$TEST_TRANSPORT", 00:16:46.309 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:46.309 "adrfam": "ipv4", 00:16:46.309 "trsvcid": "$NVMF_PORT", 00:16:46.309 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:46.309 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:46.309 "hdgst": ${hdgst:-false}, 00:16:46.309 "ddgst": ${ddgst:-false} 00:16:46.309 }, 00:16:46.309 "method": "bdev_nvme_attach_controller" 00:16:46.309 } 00:16:46.309 EOF 00:16:46.309 )") 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:16:46.309 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:16:46.309 "params": { 00:16:46.309 "name": "Nvme1", 00:16:46.309 "trtype": "tcp", 00:16:46.309 "traddr": "10.0.0.2", 00:16:46.309 "adrfam": "ipv4", 00:16:46.309 "trsvcid": "4420", 00:16:46.309 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:46.309 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:46.309 "hdgst": false, 00:16:46.309 "ddgst": false 00:16:46.309 }, 00:16:46.309 "method": "bdev_nvme_attach_controller" 00:16:46.309 }' 00:16:46.309 [2024-12-09 05:11:22.902723] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
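The RPC sequence that target/bdevio.sh issues above (TCP transport, a malloc bdev, and a subsystem exposing it on a listener) condenses to the sketch below. Arguments are taken verbatim from this run; the $RPC shorthand is illustrative only, and rpc.py using its default /var/tmp/spdk.sock control socket is an assumption of the sketch:

   RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
   # create the TCP transport with the options recorded above
   $RPC nvmf_create_transport -t tcp -o -u 8192
   # 64 MB malloc bdev with 512-byte blocks, used as the namespace backing store
   $RPC bdev_malloc_create 64 512 -b Malloc0
   # subsystem cnode1, attach the namespace, listen on the target-namespace IP
   $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
   $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
   $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
   # the bdevio app is then run with --json /dev/fd/62 --no-huge -s 1024,
   # fed the bdev_nvme_attach_controller config generated by gen_nvmf_target_json above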
00:16:46.309 [2024-12-09 05:11:22.902766] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3588292 ] 00:16:46.568 [2024-12-09 05:11:22.970492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:46.568 [2024-12-09 05:11:23.019747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.568 [2024-12-09 05:11:23.019843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.568 [2024-12-09 05:11:23.019843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.826 I/O targets: 00:16:46.826 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:46.826 00:16:46.826 00:16:46.826 CUnit - A unit testing framework for C - Version 2.1-3 00:16:46.826 http://cunit.sourceforge.net/ 00:16:46.826 00:16:46.826 00:16:46.826 Suite: bdevio tests on: Nvme1n1 00:16:46.826 Test: blockdev write read block ...passed 00:16:46.826 Test: blockdev write zeroes read block ...passed 00:16:46.826 Test: blockdev write zeroes read no split ...passed 00:16:46.826 Test: blockdev write zeroes read split ...passed 00:16:46.826 Test: blockdev write zeroes read split partial ...passed 00:16:46.826 Test: blockdev reset ...[2024-12-09 05:11:23.428614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:16:46.826 [2024-12-09 05:11:23.428680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd8e0 (9): Bad file descriptor 00:16:47.085 [2024-12-09 05:11:23.486808] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:16:47.085 passed 00:16:47.085 Test: blockdev write read 8 blocks ...passed 00:16:47.085 Test: blockdev write read size > 128k ...passed 00:16:47.085 Test: blockdev write read invalid size ...passed 00:16:47.085 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:47.085 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:47.085 Test: blockdev write read max offset ...passed 00:16:47.085 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:47.085 Test: blockdev writev readv 8 blocks ...passed 00:16:47.085 Test: blockdev writev readv 30 x 1block ...passed 00:16:47.085 Test: blockdev writev readv block ...passed 00:16:47.085 Test: blockdev writev readv size > 128k ...passed 00:16:47.085 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:47.085 Test: blockdev comparev and writev ...[2024-12-09 05:11:23.698018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:47.085 [2024-12-09 05:11:23.698048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.085 [2024-12-09 05:11:23.698062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:47.085 [2024-12-09 05:11:23.698071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:47.085 [2024-12-09 05:11:23.698320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:47.085 [2024-12-09 05:11:23.698331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:47.085 [2024-12-09 05:11:23.698343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:47.085 [2024-12-09 05:11:23.698351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:47.085 [2024-12-09 05:11:23.698605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:47.085 [2024-12-09 05:11:23.698622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:47.085 [2024-12-09 05:11:23.698636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:47.085 [2024-12-09 05:11:23.698644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:47.085 [2024-12-09 05:11:23.698900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:47.085 [2024-12-09 05:11:23.698909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:47.085 [2024-12-09 05:11:23.698921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:47.085 [2024-12-09 05:11:23.698928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:47.368 passed 00:16:47.368 Test: blockdev nvme passthru rw ...passed 00:16:47.368 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:11:23.781479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:47.368 [2024-12-09 05:11:23.781497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:47.368 [2024-12-09 05:11:23.781623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:47.368 [2024-12-09 05:11:23.781633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:47.368 [2024-12-09 05:11:23.781753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:47.368 [2024-12-09 05:11:23.781763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:47.368 [2024-12-09 05:11:23.781879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:47.368 [2024-12-09 05:11:23.781888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:47.368 passed 00:16:47.368 Test: blockdev nvme admin passthru ...passed 00:16:47.368 Test: blockdev copy ...passed 00:16:47.368 00:16:47.368 Run Summary: Type Total Ran Passed Failed Inactive 00:16:47.368 suites 1 1 n/a 0 0 00:16:47.368 tests 23 23 23 0 0 00:16:47.368 asserts 152 152 152 0 n/a 00:16:47.368 00:16:47.368 Elapsed time = 1.148 seconds 00:16:47.700 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:47.700 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.700 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:47.700 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.700 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:47.700 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:16:47.700 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:47.700 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:16:47.700 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:47.700 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:16:47.700 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:47.700 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:47.700 rmmod nvme_tcp 00:16:47.700 rmmod nvme_fabrics 00:16:47.700 rmmod nvme_keyring 00:16:47.700 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:47.700 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:16:47.700 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:16:47.700 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3588268 ']' 00:16:47.700 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3588268 00:16:47.700 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 3588268 ']' 00:16:47.700 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 3588268 00:16:47.700 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:16:47.700 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.700 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3588268 00:16:47.700 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:16:47.700 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:16:47.700 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3588268' 00:16:47.700 killing process with pid 3588268 00:16:47.700 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 3588268 00:16:47.700 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 3588268 00:16:48.045 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:48.045 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:48.045 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:48.045 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:16:48.045 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:16:48.045 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:16:48.045 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:48.045 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:48.045 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:48.045 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.045 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:48.045 05:11:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:50.617 00:16:50.617 real 0m9.768s 00:16:50.617 user 0m11.168s 00:16:50.617 sys 0m4.927s 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:16:50.617 ************************************ 00:16:50.617 END TEST nvmf_bdevio_no_huge 00:16:50.617 ************************************ 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:50.617 ************************************ 00:16:50.617 START TEST nvmf_tls 00:16:50.617 ************************************ 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:50.617 * Looking for test storage... 00:16:50.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:50.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.617 --rc genhtml_branch_coverage=1 00:16:50.617 --rc genhtml_function_coverage=1 00:16:50.617 --rc genhtml_legend=1 00:16:50.617 --rc geninfo_all_blocks=1 00:16:50.617 --rc geninfo_unexecuted_blocks=1 00:16:50.617 00:16:50.617 ' 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:50.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.617 --rc genhtml_branch_coverage=1 00:16:50.617 --rc genhtml_function_coverage=1 00:16:50.617 --rc genhtml_legend=1 00:16:50.617 --rc geninfo_all_blocks=1 00:16:50.617 --rc geninfo_unexecuted_blocks=1 00:16:50.617 00:16:50.617 ' 00:16:50.617 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:50.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.617 --rc genhtml_branch_coverage=1 00:16:50.618 --rc genhtml_function_coverage=1 00:16:50.618 --rc genhtml_legend=1 00:16:50.618 --rc geninfo_all_blocks=1 00:16:50.618 --rc geninfo_unexecuted_blocks=1 00:16:50.618 00:16:50.618 ' 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:50.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.618 --rc genhtml_branch_coverage=1 00:16:50.618 --rc genhtml_function_coverage=1 00:16:50.618 --rc genhtml_legend=1 00:16:50.618 --rc geninfo_all_blocks=1 00:16:50.618 --rc geninfo_unexecuted_blocks=1 00:16:50.618 00:16:50.618 ' 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:50.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:16:50.618 05:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:55.892 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:55.892 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:16:55.892 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:55.892 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:55.892 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:55.892 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:55.892 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:55.892 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:16:55.892 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:55.892 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:16:55.892 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:16:55.892 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:16:55.892 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:16:55.892 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:16:55.892 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:16:55.892 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:55.892 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:55.893 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:55.893 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:55.893 Found net devices under 0000:86:00.0: cvl_0_0 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:55.893 Found net devices under 0000:86:00.1: cvl_0_1 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:55.893 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:56.151 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:56.151 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:56.151 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:56.152 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:56.152 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:56.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:56.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:16:56.152 00:16:56.152 --- 10.0.0.2 ping statistics --- 00:16:56.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.152 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:16:56.152 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:56.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:56.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:16:56.152 00:16:56.152 --- 10.0.0.1 ping statistics --- 00:16:56.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.152 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:16:56.152 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:56.152 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:16:56.152 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:56.152 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:56.152 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:56.152 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:56.152 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:56.152 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:56.152 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:56.152 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:56.152 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:56.152 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:56.152 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:56.152 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3592064 00:16:56.152 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3592064 00:16:56.152 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:56.152 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3592064 ']' 00:16:56.152 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.152 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:56.152 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.152 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.152 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:56.152 [2024-12-09 05:11:32.731100] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:16:56.152 [2024-12-09 05:11:32.731147] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.411 [2024-12-09 05:11:32.800349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.411 [2024-12-09 05:11:32.839457] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.411 [2024-12-09 05:11:32.839492] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.411 [2024-12-09 05:11:32.839499] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.411 [2024-12-09 05:11:32.839506] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.411 [2024-12-09 05:11:32.839511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:56.411 [2024-12-09 05:11:32.840099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.411 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:56.411 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:56.411 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:56.411 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:56.411 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:56.411 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.411 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:16:56.411 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:56.669 true 00:16:56.669 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:16:56.669 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:56.669 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:16:56.669 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:16:56.670 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:56.928 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:56.928 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:16:57.186 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:16:57.186 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:16:57.186 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:57.444 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:57.444 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:16:57.444 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:16:57.444 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:16:57.444 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:57.444 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:16:57.702 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:16:57.702 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:16:57.702 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:57.960 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:57.960 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:58.218 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:16:58.218 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:16:58.218 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:58.218 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:58.218 05:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:58.476 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:16:58.476 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:16:58.476 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:16:58.476 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:58.476 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:58.476 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:16:58.476 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:16:58.476 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:16:58.476 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:58.476 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:58.476 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:58.476 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:16:58.476 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:16:58.476 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:16:58.476 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:16:58.476 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:16:58.476 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:58.476 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:58.476 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:16:58.476 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.HIDeWI0IJm 00:16:58.476 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:16:58.476 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.AVdmcJmk7w 00:16:58.734 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:58.734 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:58.734 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.HIDeWI0IJm 00:16:58.734 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.AVdmcJmk7w 00:16:58.734 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:58.734 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:16:58.992 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.HIDeWI0IJm 00:16:58.992 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.HIDeWI0IJm 00:16:58.992 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:59.300 [2024-12-09 05:11:35.746727] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:59.300 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:59.557 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:59.557 [2024-12-09 05:11:36.127689] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:59.557 [2024-12-09 05:11:36.127934] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:59.557 05:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:59.814 malloc0 00:16:59.814 05:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:00.072 05:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.HIDeWI0IJm 00:17:00.072 05:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:00.330 05:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.HIDeWI0IJm 00:17:12.536 Initializing NVMe Controllers 00:17:12.536 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:12.536 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:12.536 Initialization complete. Launching workers. 00:17:12.536 ======================================================== 00:17:12.536 Latency(us) 00:17:12.536 Device Information : IOPS MiB/s Average min max 00:17:12.536 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16322.49 63.76 3921.08 835.31 5542.17 00:17:12.536 ======================================================== 00:17:12.536 Total : 16322.49 63.76 3921.08 835.31 5542.17 00:17:12.536 00:17:12.536 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HIDeWI0IJm 00:17:12.536 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:12.536 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:12.536 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:12.536 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.HIDeWI0IJm 00:17:12.536 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:12.536 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3594533 00:17:12.536 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:12.536 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:12.536 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3594533 /var/tmp/bdevperf.sock 00:17:12.536 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3594533 ']' 00:17:12.536 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:12.536 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:12.536 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:12.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:12.536 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:12.536 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:12.536 [2024-12-09 05:11:47.076223] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:17:12.536 [2024-12-09 05:11:47.076286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3594533 ] 00:17:12.536 [2024-12-09 05:11:47.138226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.536 [2024-12-09 05:11:47.178691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.536 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:12.536 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:12.536 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.HIDeWI0IJm 00:17:12.536 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:12.536 [2024-12-09 05:11:47.618691] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:12.536 TLSTESTn1 00:17:12.536 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:12.536 Running I/O for 10 seconds... 
00:17:13.471 5431.00 IOPS, 21.21 MiB/s [2024-12-09T04:11:51.053Z] 5529.00 IOPS, 21.60 MiB/s [2024-12-09T04:11:51.990Z] 5520.67 IOPS, 21.57 MiB/s [2024-12-09T04:11:52.926Z] 5570.00 IOPS, 21.76 MiB/s [2024-12-09T04:11:53.860Z] 5596.60 IOPS, 21.86 MiB/s [2024-12-09T04:11:55.255Z] 5611.50 IOPS, 21.92 MiB/s [2024-12-09T04:11:55.822Z] 5621.71 IOPS, 21.96 MiB/s [2024-12-09T04:11:57.198Z] 5634.88 IOPS, 22.01 MiB/s [2024-12-09T04:11:58.135Z] 5598.33 IOPS, 21.87 MiB/s [2024-12-09T04:11:58.135Z] 5608.70 IOPS, 21.91 MiB/s 00:17:21.489 Latency(us) 00:17:21.489 [2024-12-09T04:11:58.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.489 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:21.489 Verification LBA range: start 0x0 length 0x2000 00:17:21.489 TLSTESTn1 : 10.02 5611.93 21.92 0.00 0.00 22771.09 4758.48 33736.79 00:17:21.489 [2024-12-09T04:11:58.135Z] =================================================================================================================== 00:17:21.489 [2024-12-09T04:11:58.135Z] Total : 5611.93 21.92 0.00 0.00 22771.09 4758.48 33736.79 00:17:21.489 { 00:17:21.489 "results": [ 00:17:21.489 { 00:17:21.489 "job": "TLSTESTn1", 00:17:21.489 "core_mask": "0x4", 00:17:21.489 "workload": "verify", 00:17:21.489 "status": "finished", 00:17:21.489 "verify_range": { 00:17:21.489 "start": 0, 00:17:21.489 "length": 8192 00:17:21.489 }, 00:17:21.489 "queue_depth": 128, 00:17:21.489 "io_size": 4096, 00:17:21.489 "runtime": 10.016881, 00:17:21.489 "iops": 5611.926506863763, 00:17:21.489 "mibps": 21.921587917436575, 00:17:21.489 "io_failed": 0, 00:17:21.489 "io_timeout": 0, 00:17:21.489 "avg_latency_us": 22771.092528953795, 00:17:21.489 "min_latency_us": 4758.48347826087, 00:17:21.489 "max_latency_us": 33736.79304347826 00:17:21.489 } 00:17:21.489 ], 00:17:21.489 "core_count": 1 00:17:21.489 } 00:17:21.489 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:21.489 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3594533 00:17:21.489 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3594533 ']' 00:17:21.489 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3594533 00:17:21.489 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:21.489 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.489 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3594533 00:17:21.489 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:21.489 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:21.489 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3594533' 00:17:21.489 killing process with pid 3594533 00:17:21.489 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3594533 00:17:21.489 Received shutdown signal, test time was about 10.000000 seconds 00:17:21.489 00:17:21.489 Latency(us) 00:17:21.489 [2024-12-09T04:11:58.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.489 [2024-12-09T04:11:58.135Z] 
=================================================================================================================== 00:17:21.489 [2024-12-09T04:11:58.135Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:21.489 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3594533 00:17:21.489 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AVdmcJmk7w 00:17:21.489 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:21.489 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AVdmcJmk7w 00:17:21.489 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:21.489 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:21.489 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:21.489 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:21.489 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AVdmcJmk7w 00:17:21.489 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:21.489 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:21.489 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:21.489 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.AVdmcJmk7w 00:17:21.489 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:21.489 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3596278 00:17:21.489 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:21.489 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:21.490 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3596278 /var/tmp/bdevperf.sock 00:17:21.490 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3596278 ']' 00:17:21.490 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:21.490 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.490 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:21.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:21.490 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.490 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:21.749 [2024-12-09 05:11:58.152293] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:17:21.749 [2024-12-09 05:11:58.152348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3596278 ] 00:17:21.749 [2024-12-09 05:11:58.214638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.749 [2024-12-09 05:11:58.253579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.749 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:21.749 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:21.749 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AVdmcJmk7w 00:17:22.008 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:22.268 [2024-12-09 05:11:58.717923] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:22.268 [2024-12-09 05:11:58.722685] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:22.268 [2024-12-09 05:11:58.723305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9911a0 (107): Transport endpoint is not connected 00:17:22.268 [2024-12-09 05:11:58.724297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9911a0 (9): Bad file descriptor 00:17:22.268 [2024-12-09 05:11:58.725299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:22.268 [2024-12-09 05:11:58.725320] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:22.268 [2024-12-09 05:11:58.725328] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:22.268 [2024-12-09 05:11:58.725340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:17:22.268 request: 00:17:22.268 { 00:17:22.268 "name": "TLSTEST", 00:17:22.268 "trtype": "tcp", 00:17:22.268 "traddr": "10.0.0.2", 00:17:22.268 "adrfam": "ipv4", 00:17:22.268 "trsvcid": "4420", 00:17:22.268 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:22.268 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:22.268 "prchk_reftag": false, 00:17:22.268 "prchk_guard": false, 00:17:22.268 "hdgst": false, 00:17:22.268 "ddgst": false, 00:17:22.268 "psk": "key0", 00:17:22.268 "allow_unrecognized_csi": false, 00:17:22.268 "method": "bdev_nvme_attach_controller", 00:17:22.268 "req_id": 1 00:17:22.268 } 00:17:22.268 Got JSON-RPC error response 00:17:22.268 response: 00:17:22.268 { 00:17:22.268 "code": -5, 00:17:22.268 "message": "Input/output error" 00:17:22.268 } 00:17:22.268 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3596278 00:17:22.268 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3596278 ']' 00:17:22.268 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3596278 00:17:22.268 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:22.268 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.268 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3596278 00:17:22.268 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:22.268 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:22.268 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3596278' 00:17:22.268 killing process with pid 3596278 00:17:22.268 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3596278 00:17:22.268 Received shutdown signal, test time was about 10.000000 seconds 00:17:22.268 00:17:22.268 Latency(us) 00:17:22.268 [2024-12-09T04:11:58.914Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.268 [2024-12-09T04:11:58.914Z] =================================================================================================================== 00:17:22.268 [2024-12-09T04:11:58.914Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:22.268 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3596278 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.HIDeWI0IJm 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.HIDeWI0IJm 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.HIDeWI0IJm 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.HIDeWI0IJm 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3596506 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3596506 /var/tmp/bdevperf.sock 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3596506 ']' 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:22.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:22.528 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:22.528 [2024-12-09 05:11:59.027371] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:17:22.528 [2024-12-09 05:11:59.027420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3596506 ] 00:17:22.528 [2024-12-09 05:11:59.089131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.528 [2024-12-09 05:11:59.127061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:22.787 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:22.787 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:22.787 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.HIDeWI0IJm 00:17:22.787 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:17:23.046 [2024-12-09 05:11:59.570895] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:23.046 [2024-12-09 05:11:59.575643] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:23.046 [2024-12-09 05:11:59.575669] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:23.046 [2024-12-09 05:11:59.575695] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:23.046 [2024-12-09 05:11:59.576332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5921a0 (107): Transport endpoint is not connected 00:17:23.046 [2024-12-09 05:11:59.577324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5921a0 (9): Bad file descriptor 00:17:23.046 [2024-12-09 05:11:59.578326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:23.046 [2024-12-09 05:11:59.578341] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:23.046 [2024-12-09 05:11:59.578349] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:23.046 [2024-12-09 05:11:59.578357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:17:23.046 request: 00:17:23.046 { 00:17:23.046 "name": "TLSTEST", 00:17:23.046 "trtype": "tcp", 00:17:23.046 "traddr": "10.0.0.2", 00:17:23.046 "adrfam": "ipv4", 00:17:23.046 "trsvcid": "4420", 00:17:23.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:23.046 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:23.046 "prchk_reftag": false, 00:17:23.046 "prchk_guard": false, 00:17:23.046 "hdgst": false, 00:17:23.046 "ddgst": false, 00:17:23.046 "psk": "key0", 00:17:23.046 "allow_unrecognized_csi": false, 00:17:23.046 "method": "bdev_nvme_attach_controller", 00:17:23.046 "req_id": 1 00:17:23.046 } 00:17:23.046 Got JSON-RPC error response 00:17:23.046 response: 00:17:23.046 { 00:17:23.046 "code": -5, 00:17:23.046 "message": "Input/output error" 00:17:23.046 } 00:17:23.046 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3596506 00:17:23.046 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3596506 ']' 00:17:23.047 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3596506 00:17:23.047 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:23.047 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:23.047 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3596506 00:17:23.047 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:23.047 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:23.047 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3596506' 00:17:23.047 killing process with pid 3596506 00:17:23.047 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3596506 00:17:23.047 Received shutdown signal, test time was about 10.000000 seconds 00:17:23.047 00:17:23.047 Latency(us) 00:17:23.047 [2024-12-09T04:11:59.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.047 [2024-12-09T04:11:59.693Z] =================================================================================================================== 00:17:23.047 [2024-12-09T04:11:59.693Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:23.047 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3596506 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.HIDeWI0IJm 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.HIDeWI0IJm 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.HIDeWI0IJm 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.HIDeWI0IJm 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3596718 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3596718 /var/tmp/bdevperf.sock 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3596718 ']' 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:23.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.306 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:23.306 [2024-12-09 05:11:59.890929] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:17:23.306 [2024-12-09 05:11:59.890978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3596718 ] 00:17:23.565 [2024-12-09 05:11:59.952021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.565 [2024-12-09 05:11:59.990469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:23.565 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.565 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:23.565 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.HIDeWI0IJm 00:17:23.824 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:23.824 [2024-12-09 05:12:00.457602] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:23.824 [2024-12-09 05:12:00.466676] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:23.824 [2024-12-09 05:12:00.466702] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:23.824 [2024-12-09 05:12:00.466727] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:23.824 [2024-12-09 05:12:00.467111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7e1a0 (107): Transport endpoint is not connected 00:17:23.824 [2024-12-09 05:12:00.468105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7e1a0 (9): Bad file descriptor 00:17:24.083 [2024-12-09 05:12:00.469106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:17:24.083 [2024-12-09 05:12:00.469118] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:24.083 [2024-12-09 05:12:00.469126] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:17:24.083 [2024-12-09 05:12:00.469134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:17:24.083 request: 00:17:24.083 { 00:17:24.083 "name": "TLSTEST", 00:17:24.083 "trtype": "tcp", 00:17:24.083 "traddr": "10.0.0.2", 00:17:24.083 "adrfam": "ipv4", 00:17:24.083 "trsvcid": "4420", 00:17:24.083 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:24.083 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:24.083 "prchk_reftag": false, 00:17:24.083 "prchk_guard": false, 00:17:24.083 "hdgst": false, 00:17:24.083 "ddgst": false, 00:17:24.083 "psk": "key0", 00:17:24.083 "allow_unrecognized_csi": false, 00:17:24.083 "method": "bdev_nvme_attach_controller", 00:17:24.083 "req_id": 1 00:17:24.083 } 00:17:24.083 Got JSON-RPC error response 00:17:24.083 response: 00:17:24.083 { 00:17:24.083 "code": -5, 00:17:24.083 "message": "Input/output error" 00:17:24.083 } 00:17:24.083 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3596718 00:17:24.083 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3596718 ']' 00:17:24.083 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3596718 00:17:24.083 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:24.083 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:24.083 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3596718 00:17:24.083 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:24.083 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:24.083 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3596718' 00:17:24.083 killing process with pid 3596718 00:17:24.083 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3596718 00:17:24.083 Received shutdown signal, test time was about 10.000000 seconds 00:17:24.083 00:17:24.083 Latency(us) 00:17:24.083 [2024-12-09T04:12:00.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.083 [2024-12-09T04:12:00.729Z] =================================================================================================================== 00:17:24.083 [2024-12-09T04:12:00.729Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:24.083 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3596718 00:17:24.342 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:24.342 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:24.342 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:24.342 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:24.342 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:24.342 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:24.343 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:24.343 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:24.343 
05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:24.343 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.343 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:24.343 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.343 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:24.343 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:24.343 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:24.343 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:24.343 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:24.343 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:24.343 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3596807 00:17:24.343 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:24.343 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:24.343 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3596807 /var/tmp/bdevperf.sock 00:17:24.343 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3596807 ']' 00:17:24.343 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:24.343 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:24.343 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:24.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:24.343 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:24.343 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:24.343 [2024-12-09 05:12:00.785558] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:17:24.343 [2024-12-09 05:12:00.785611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3596807 ] 00:17:24.343 [2024-12-09 05:12:00.847133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.343 [2024-12-09 05:12:00.889573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.343 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.343 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:24.343 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:17:24.601 [2024-12-09 05:12:01.161429] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:17:24.601 [2024-12-09 05:12:01.161462] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:24.601 request: 00:17:24.601 { 00:17:24.601 "name": "key0", 00:17:24.601 "path": "", 00:17:24.601 "method": "keyring_file_add_key", 00:17:24.601 "req_id": 1 00:17:24.601 } 00:17:24.601 Got JSON-RPC error response 00:17:24.601 response: 00:17:24.601 { 00:17:24.601 "code": -1, 00:17:24.601 "message": "Operation not permitted" 00:17:24.601 } 00:17:24.601 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:24.860 [2024-12-09 05:12:01.362059] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:24.860 [2024-12-09 05:12:01.362090] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:17:24.860 request: 00:17:24.860 { 00:17:24.860 "name": "TLSTEST", 00:17:24.860 "trtype": "tcp", 00:17:24.860 "traddr": "10.0.0.2", 00:17:24.860 "adrfam": "ipv4", 00:17:24.860 "trsvcid": "4420", 00:17:24.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:24.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:24.860 "prchk_reftag": false, 00:17:24.860 "prchk_guard": false, 00:17:24.860 "hdgst": false, 00:17:24.860 "ddgst": false, 00:17:24.860 "psk": "key0", 00:17:24.860 "allow_unrecognized_csi": false, 00:17:24.860 "method": "bdev_nvme_attach_controller", 00:17:24.860 "req_id": 1 00:17:24.860 } 00:17:24.860 Got JSON-RPC error response 00:17:24.860 response: 00:17:24.860 { 00:17:24.860 "code": -126, 00:17:24.860 "message": "Required key not available" 00:17:24.860 } 00:17:24.860 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3596807 00:17:24.860 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3596807 ']' 00:17:24.860 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3596807 00:17:24.860 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:24.860 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:24.860 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3596807 00:17:24.860 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:24.860 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:24.860 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3596807' 00:17:24.860 killing process with pid 3596807 00:17:24.860 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3596807 00:17:24.860 Received shutdown signal, test time was about 10.000000 seconds 00:17:24.860 00:17:24.860 Latency(us) 00:17:24.860 [2024-12-09T04:12:01.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.860 [2024-12-09T04:12:01.506Z] =================================================================================================================== 00:17:24.860 [2024-12-09T04:12:01.506Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:24.860 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3596807 00:17:25.118 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:25.118 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:25.118 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:25.118 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:25.118 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:25.118 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3592064 00:17:25.118 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3592064 ']' 00:17:25.118 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3592064 00:17:25.118 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:25.118 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:25.118 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3592064 00:17:25.118 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:25.118 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:25.118 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3592064' 00:17:25.118 killing process with pid 3592064 00:17:25.118 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3592064 00:17:25.118 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3592064 00:17:25.377 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:25.377 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:25.377 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:25.377 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:25.377 05:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:25.377 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:17:25.377 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:25.377 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:25.377 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:17:25.377 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.gMWyOCJQJN 00:17:25.377 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:25.377 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.gMWyOCJQJN 00:17:25.377 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:17:25.377 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:25.377 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:25.377 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:25.377 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3597121 00:17:25.377 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3597121 00:17:25.377 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:25.377 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3597121 ']' 00:17:25.377 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.377 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:25.377 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.377 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:25.377 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:25.377 [2024-12-09 05:12:01.986835] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:17:25.377 [2024-12-09 05:12:01.986885] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.636 [2024-12-09 05:12:02.053405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.636 [2024-12-09 05:12:02.093255] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:25.636 [2024-12-09 05:12:02.093293] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:25.636 [2024-12-09 05:12:02.093300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:25.636 [2024-12-09 05:12:02.093307] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:25.636 [2024-12-09 05:12:02.093312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:25.636 [2024-12-09 05:12:02.093905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.636 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:25.636 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:25.636 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:25.636 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:25.636 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:25.636 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.636 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.gMWyOCJQJN 00:17:25.636 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gMWyOCJQJN 00:17:25.636 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:25.895 [2024-12-09 05:12:02.402912] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:25.895 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:26.153 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:26.153 [2024-12-09 05:12:02.791915] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:26.153 [2024-12-09 05:12:02.792154] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.411 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:26.411 malloc0 00:17:26.411 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:26.670 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gMWyOCJQJN 00:17:26.929 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:26.929 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gMWyOCJQJN 00:17:26.929 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:17:26.929 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:26.929 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:26.929 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gMWyOCJQJN 00:17:26.929 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:26.929 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3597391 00:17:26.929 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:26.929 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:26.929 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3597391 /var/tmp/bdevperf.sock 00:17:26.929 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3597391 ']' 00:17:26.929 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:26.929 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:26.929 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:26.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:26.929 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:26.929 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:27.187 [2024-12-09 05:12:03.603740] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:17:27.187 [2024-12-09 05:12:03.603789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3597391 ] 00:17:27.187 [2024-12-09 05:12:03.665103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.187 [2024-12-09 05:12:03.706049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.187 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:27.187 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:27.187 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gMWyOCJQJN 00:17:27.445 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:27.703 [2024-12-09 05:12:04.178572] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:27.703 TLSTESTn1 00:17:27.703 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:27.961 Running I/O for 10 seconds... 00:17:29.855 5368.00 IOPS, 20.97 MiB/s [2024-12-09T04:12:07.437Z] 5422.00 IOPS, 21.18 MiB/s [2024-12-09T04:12:08.812Z] 5407.33 IOPS, 21.12 MiB/s [2024-12-09T04:12:09.379Z] 5435.25 IOPS, 21.23 MiB/s [2024-12-09T04:12:10.754Z] 5456.80 IOPS, 21.32 MiB/s [2024-12-09T04:12:11.689Z] 5438.83 IOPS, 21.25 MiB/s [2024-12-09T04:12:12.625Z] 5446.57 IOPS, 21.28 MiB/s [2024-12-09T04:12:13.559Z] 5437.50 IOPS, 21.24 MiB/s [2024-12-09T04:12:14.494Z] 5443.33 IOPS, 21.26 MiB/s [2024-12-09T04:12:14.494Z] 5442.40 IOPS, 21.26 MiB/s 00:17:37.848 Latency(us) 00:17:37.848 [2024-12-09T04:12:14.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.848 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:37.848 Verification LBA range: start 0x0 length 0x2000 00:17:37.849 TLSTESTn1 : 10.02 5444.01 21.27 0.00 0.00 23470.86 6240.17 43082.80 00:17:37.849 [2024-12-09T04:12:14.495Z] =================================================================================================================== 00:17:37.849 [2024-12-09T04:12:14.495Z] Total : 5444.01 21.27 0.00 0.00 23470.86 6240.17 43082.80 00:17:37.849 { 00:17:37.849 "results": [ 00:17:37.849 { 00:17:37.849 "job": "TLSTESTn1", 00:17:37.849 "core_mask": "0x4", 00:17:37.849 "workload": "verify", 00:17:37.849 "status": "finished", 00:17:37.849 "verify_range": { 00:17:37.849 "start": 0, 00:17:37.849 "length": 8192 00:17:37.849 }, 00:17:37.849 "queue_depth": 128, 00:17:37.849 "io_size": 4096, 00:17:37.849 "runtime": 10.02056, 00:17:37.849 "iops": 5444.007121358487, 00:17:37.849 "mibps": 21.26565281780659, 00:17:37.849 "io_failed": 0, 00:17:37.849 "io_timeout": 0, 00:17:37.849 "avg_latency_us": 23470.864533241518, 00:17:37.849 "min_latency_us": 6240.166956521739, 00:17:37.849 "max_latency_us": 43082.79652173913 00:17:37.849 } 00:17:37.849 ], 00:17:37.849 
"core_count": 1 00:17:37.849 } 00:17:37.849 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:37.849 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3597391 00:17:37.849 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3597391 ']' 00:17:37.849 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3597391 00:17:37.849 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:37.849 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:37.849 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3597391 00:17:37.849 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:37.849 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:37.849 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3597391' 00:17:37.849 killing process with pid 3597391 00:17:37.849 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3597391 00:17:37.849 Received shutdown signal, test time was about 10.000000 seconds 00:17:37.849 00:17:37.849 Latency(us) 00:17:37.849 [2024-12-09T04:12:14.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.849 [2024-12-09T04:12:14.495Z] =================================================================================================================== 00:17:37.849 [2024-12-09T04:12:14.495Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:37.849 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3597391 00:17:38.108 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.gMWyOCJQJN 00:17:38.108 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gMWyOCJQJN 00:17:38.108 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:38.108 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gMWyOCJQJN 00:17:38.108 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:38.108 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.108 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:38.108 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.108 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gMWyOCJQJN 00:17:38.108 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:38.108 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:38.108 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:17:38.108 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gMWyOCJQJN 00:17:38.108 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:38.108 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3599617 00:17:38.108 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:38.108 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:38.108 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3599617 /var/tmp/bdevperf.sock 00:17:38.108 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3599617 ']' 00:17:38.108 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:38.108 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:38.108 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:38.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:38.108 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:38.108 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:38.108 [2024-12-09 05:12:14.727166] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:17:38.108 [2024-12-09 05:12:14.727214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3599617 ] 00:17:38.373 [2024-12-09 05:12:14.787266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.373 [2024-12-09 05:12:14.824573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.373 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.373 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:38.373 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gMWyOCJQJN 00:17:38.631 [2024-12-09 05:12:15.087772] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.gMWyOCJQJN': 0100666 00:17:38.631 [2024-12-09 05:12:15.087803] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:38.631 request: 00:17:38.631 { 00:17:38.631 "name": "key0", 00:17:38.631 "path": "/tmp/tmp.gMWyOCJQJN", 00:17:38.631 "method": "keyring_file_add_key", 00:17:38.631 "req_id": 1 00:17:38.631 } 00:17:38.631 Got JSON-RPC error response 00:17:38.631 response: 00:17:38.631 { 00:17:38.631 "code": -1, 00:17:38.631 "message": "Operation not permitted" 00:17:38.631 } 00:17:38.631 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:38.889 [2024-12-09 05:12:15.288378] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:38.889 [2024-12-09 05:12:15.288411] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:17:38.889 request: 00:17:38.889 { 00:17:38.889 "name": "TLSTEST", 00:17:38.889 "trtype": "tcp", 00:17:38.889 "traddr": "10.0.0.2", 00:17:38.889 "adrfam": "ipv4", 00:17:38.889 "trsvcid": "4420", 00:17:38.889 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.889 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:38.889 "prchk_reftag": false, 00:17:38.889 "prchk_guard": false, 00:17:38.889 "hdgst": false, 00:17:38.889 "ddgst": false, 00:17:38.889 "psk": "key0", 00:17:38.889 "allow_unrecognized_csi": false, 00:17:38.889 "method": "bdev_nvme_attach_controller", 00:17:38.889 "req_id": 1 00:17:38.889 } 00:17:38.889 Got JSON-RPC error response 00:17:38.889 response: 00:17:38.889 { 00:17:38.889 "code": -126, 00:17:38.889 "message": "Required key not available" 00:17:38.889 } 00:17:38.889 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3599617 00:17:38.889 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3599617 ']' 00:17:38.889 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3599617 00:17:38.889 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:38.889 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:38.889 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3599617 00:17:38.889 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:38.889 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:38.889 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3599617' 00:17:38.889 killing process with pid 3599617 00:17:38.889 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3599617 00:17:38.889 Received shutdown signal, test time was about 10.000000 seconds 00:17:38.889 00:17:38.889 Latency(us) 00:17:38.889 [2024-12-09T04:12:15.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.889 [2024-12-09T04:12:15.535Z] =================================================================================================================== 00:17:38.889 [2024-12-09T04:12:15.535Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:38.889 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3599617 00:17:39.148 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:39.148 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:39.148 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:39.148 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:39.148 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:39.148 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3597121 00:17:39.148 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3597121 ']' 00:17:39.148 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3597121 00:17:39.148 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:39.148 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.148 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3597121 00:17:39.148 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:39.148 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:39.148 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3597121' 00:17:39.148 killing process with pid 3597121 00:17:39.148 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3597121 00:17:39.148 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3597121 00:17:39.406 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:17:39.406 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:39.406 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:39.406 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:39.406 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=3599857 00:17:39.406 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:39.406 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3599857 00:17:39.406 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3599857 ']' 00:17:39.406 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.406 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:39.406 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.406 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:39.406 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:39.406 [2024-12-09 05:12:15.849665] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:17:39.406 [2024-12-09 05:12:15.849711] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.406 [2024-12-09 05:12:15.914217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.406 [2024-12-09 05:12:15.952959] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.406 [2024-12-09 05:12:15.952990] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.406 [2024-12-09 05:12:15.953003] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.406 [2024-12-09 05:12:15.953013] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.406 [2024-12-09 05:12:15.953020] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
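The key material used throughout these cases is the interchange-format string built earlier by format_interchange_psk and written to /tmp/tmp.gMWyOCJQJN. A rough, stand-alone sketch of that setup step follows, assuming the hex key is taken as ASCII bytes and a little-endian CRC-32 is appended before base64 encoding; the embedded python mirrors what the format_key helper in nvmf/common.sh shells out to in the trace above, but the byte order and framing are assumptions, and only the key value, the 02 digest tag, and the output path come from the log:

KEY=00112233445566778899aabbccddeeff0011223344556677
key_long=$(python3 - <<EOF
import base64, zlib
key = b"$KEY"
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC-32 of the configured PSK, assumed little-endian
print("NVMeTLSkey-1:02:{}:".format(base64.b64encode(key + crc).decode()))
EOF
)
echo -n "$key_long" > /tmp/tmp.gMWyOCJQJN
chmod 0600 /tmp/tmp.gMWyOCJQJN   # the keyring refuses group/world-readable key files

The chmod matters: the 0666 cases in this part of the run fail in keyring_file_check_path with "Invalid permissions for key file ... 0100666", which is exactly the behaviour target/tls.sh exercises before restoring 0600.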
00:17:39.406 [2024-12-09 05:12:15.953586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.407 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:39.407 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:39.407 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:39.407 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:39.407 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:39.665 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.665 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.gMWyOCJQJN 00:17:39.665 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:39.665 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.gMWyOCJQJN 00:17:39.665 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:17:39.665 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.665 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:17:39.665 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.665 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.gMWyOCJQJN 00:17:39.665 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gMWyOCJQJN 00:17:39.665 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:39.665 [2024-12-09 05:12:16.253794] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.665 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:39.923 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:40.180 [2024-12-09 05:12:16.642801] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:40.180 [2024-12-09 05:12:16.643018] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.180 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:40.439 malloc0 00:17:40.439 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:40.439 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gMWyOCJQJN 00:17:40.750 [2024-12-09 
05:12:17.204098] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.gMWyOCJQJN': 0100666 00:17:40.750 [2024-12-09 05:12:17.204122] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:40.750 request: 00:17:40.750 { 00:17:40.750 "name": "key0", 00:17:40.750 "path": "/tmp/tmp.gMWyOCJQJN", 00:17:40.750 "method": "keyring_file_add_key", 00:17:40.750 "req_id": 1 00:17:40.750 } 00:17:40.750 Got JSON-RPC error response 00:17:40.750 response: 00:17:40.750 { 00:17:40.750 "code": -1, 00:17:40.750 "message": "Operation not permitted" 00:17:40.750 } 00:17:40.750 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:41.051 [2024-12-09 05:12:17.384590] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:17:41.051 [2024-12-09 05:12:17.384623] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:41.051 request: 00:17:41.051 { 00:17:41.051 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.051 "host": "nqn.2016-06.io.spdk:host1", 00:17:41.051 "psk": "key0", 00:17:41.051 "method": "nvmf_subsystem_add_host", 00:17:41.051 "req_id": 1 00:17:41.051 } 00:17:41.051 Got JSON-RPC error response 00:17:41.051 response: 00:17:41.051 { 00:17:41.051 "code": -32603, 00:17:41.051 "message": "Internal error" 00:17:41.051 } 00:17:41.051 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:41.051 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:41.051 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:41.051 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:41.051 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3599857 00:17:41.051 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3599857 ']' 00:17:41.052 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3599857 00:17:41.052 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:41.052 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:41.052 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3599857 00:17:41.052 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:41.052 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:41.052 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3599857' 00:17:41.052 killing process with pid 3599857 00:17:41.052 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3599857 00:17:41.052 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3599857 00:17:41.052 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.gMWyOCJQJN 00:17:41.052 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:17:41.052 05:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:41.052 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:41.052 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:41.052 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3600119 00:17:41.052 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:41.052 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3600119 00:17:41.052 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3600119 ']' 00:17:41.052 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.052 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:41.052 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.052 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:41.052 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:41.360 [2024-12-09 05:12:17.708171] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:17:41.360 [2024-12-09 05:12:17.708236] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.360 [2024-12-09 05:12:17.776120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.360 [2024-12-09 05:12:17.816942] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.360 [2024-12-09 05:12:17.816976] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.360 [2024-12-09 05:12:17.816983] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.360 [2024-12-09 05:12:17.816989] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.360 [2024-12-09 05:12:17.816995] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:41.360 [2024-12-09 05:12:17.817548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.360 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:41.360 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:41.360 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:41.360 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:41.360 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:41.360 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.360 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.gMWyOCJQJN 00:17:41.360 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gMWyOCJQJN 00:17:41.360 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:41.620 [2024-12-09 05:12:18.122448] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.620 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:41.878 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:41.878 [2024-12-09 05:12:18.495424] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:41.878 [2024-12-09 05:12:18.495649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:41.878 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:42.136 malloc0 00:17:42.136 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:42.395 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gMWyOCJQJN 00:17:42.655 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:42.655 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3600428 00:17:42.655 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:42.655 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:42.655 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3600428 /var/tmp/bdevperf.sock 00:17:42.655 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3600428 ']' 00:17:42.655 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:42.655 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:42.655 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:42.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:42.655 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:42.655 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:42.655 [2024-12-09 05:12:19.288906] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:17:42.655 [2024-12-09 05:12:19.288958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3600428 ] 00:17:42.914 [2024-12-09 05:12:19.350411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.914 [2024-12-09 05:12:19.392264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:42.914 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:42.914 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:42.914 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gMWyOCJQJN 00:17:43.172 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:43.431 [2024-12-09 05:12:19.840108] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:43.431 TLSTESTn1 00:17:43.431 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:17:43.689 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:17:43.689 "subsystems": [ 00:17:43.689 { 00:17:43.689 "subsystem": "keyring", 00:17:43.689 "config": [ 00:17:43.689 { 00:17:43.689 "method": "keyring_file_add_key", 00:17:43.689 "params": { 00:17:43.689 "name": "key0", 00:17:43.689 "path": "/tmp/tmp.gMWyOCJQJN" 00:17:43.689 } 00:17:43.689 } 00:17:43.689 ] 00:17:43.689 }, 00:17:43.689 { 00:17:43.689 "subsystem": "iobuf", 00:17:43.689 "config": [ 00:17:43.689 { 00:17:43.689 "method": "iobuf_set_options", 00:17:43.689 "params": { 00:17:43.689 "small_pool_count": 8192, 00:17:43.689 "large_pool_count": 1024, 00:17:43.689 "small_bufsize": 8192, 00:17:43.689 "large_bufsize": 135168, 00:17:43.689 "enable_numa": false 00:17:43.689 } 00:17:43.689 } 00:17:43.689 ] 00:17:43.689 }, 00:17:43.689 { 00:17:43.689 "subsystem": "sock", 00:17:43.689 "config": [ 00:17:43.689 { 00:17:43.689 "method": "sock_set_default_impl", 00:17:43.689 "params": { 00:17:43.689 "impl_name": "posix" 
00:17:43.689 } 00:17:43.689 }, 00:17:43.689 { 00:17:43.689 "method": "sock_impl_set_options", 00:17:43.689 "params": { 00:17:43.689 "impl_name": "ssl", 00:17:43.689 "recv_buf_size": 4096, 00:17:43.689 "send_buf_size": 4096, 00:17:43.689 "enable_recv_pipe": true, 00:17:43.689 "enable_quickack": false, 00:17:43.689 "enable_placement_id": 0, 00:17:43.689 "enable_zerocopy_send_server": true, 00:17:43.689 "enable_zerocopy_send_client": false, 00:17:43.689 "zerocopy_threshold": 0, 00:17:43.689 "tls_version": 0, 00:17:43.689 "enable_ktls": false 00:17:43.689 } 00:17:43.689 }, 00:17:43.689 { 00:17:43.689 "method": "sock_impl_set_options", 00:17:43.689 "params": { 00:17:43.689 "impl_name": "posix", 00:17:43.689 "recv_buf_size": 2097152, 00:17:43.689 "send_buf_size": 2097152, 00:17:43.689 "enable_recv_pipe": true, 00:17:43.689 "enable_quickack": false, 00:17:43.689 "enable_placement_id": 0, 00:17:43.689 "enable_zerocopy_send_server": true, 00:17:43.689 "enable_zerocopy_send_client": false, 00:17:43.689 "zerocopy_threshold": 0, 00:17:43.689 "tls_version": 0, 00:17:43.689 "enable_ktls": false 00:17:43.689 } 00:17:43.689 } 00:17:43.689 ] 00:17:43.689 }, 00:17:43.689 { 00:17:43.689 "subsystem": "vmd", 00:17:43.689 "config": [] 00:17:43.689 }, 00:17:43.689 { 00:17:43.689 "subsystem": "accel", 00:17:43.689 "config": [ 00:17:43.689 { 00:17:43.689 "method": "accel_set_options", 00:17:43.689 "params": { 00:17:43.689 "small_cache_size": 128, 00:17:43.689 "large_cache_size": 16, 00:17:43.689 "task_count": 2048, 00:17:43.689 "sequence_count": 2048, 00:17:43.689 "buf_count": 2048 00:17:43.689 } 00:17:43.689 } 00:17:43.689 ] 00:17:43.689 }, 00:17:43.689 { 00:17:43.689 "subsystem": "bdev", 00:17:43.689 "config": [ 00:17:43.689 { 00:17:43.689 "method": "bdev_set_options", 00:17:43.690 "params": { 00:17:43.690 "bdev_io_pool_size": 65535, 00:17:43.690 "bdev_io_cache_size": 256, 00:17:43.690 "bdev_auto_examine": true, 00:17:43.690 "iobuf_small_cache_size": 128, 00:17:43.690 "iobuf_large_cache_size": 16 00:17:43.690 } 00:17:43.690 }, 00:17:43.690 { 00:17:43.690 "method": "bdev_raid_set_options", 00:17:43.690 "params": { 00:17:43.690 "process_window_size_kb": 1024, 00:17:43.690 "process_max_bandwidth_mb_sec": 0 00:17:43.690 } 00:17:43.690 }, 00:17:43.690 { 00:17:43.690 "method": "bdev_iscsi_set_options", 00:17:43.690 "params": { 00:17:43.690 "timeout_sec": 30 00:17:43.690 } 00:17:43.690 }, 00:17:43.690 { 00:17:43.690 "method": "bdev_nvme_set_options", 00:17:43.690 "params": { 00:17:43.690 "action_on_timeout": "none", 00:17:43.690 "timeout_us": 0, 00:17:43.690 "timeout_admin_us": 0, 00:17:43.690 "keep_alive_timeout_ms": 10000, 00:17:43.690 "arbitration_burst": 0, 00:17:43.690 "low_priority_weight": 0, 00:17:43.690 "medium_priority_weight": 0, 00:17:43.690 "high_priority_weight": 0, 00:17:43.690 "nvme_adminq_poll_period_us": 10000, 00:17:43.690 "nvme_ioq_poll_period_us": 0, 00:17:43.690 "io_queue_requests": 0, 00:17:43.690 "delay_cmd_submit": true, 00:17:43.690 "transport_retry_count": 4, 00:17:43.690 "bdev_retry_count": 3, 00:17:43.690 "transport_ack_timeout": 0, 00:17:43.690 "ctrlr_loss_timeout_sec": 0, 00:17:43.690 "reconnect_delay_sec": 0, 00:17:43.690 "fast_io_fail_timeout_sec": 0, 00:17:43.690 "disable_auto_failback": false, 00:17:43.690 "generate_uuids": false, 00:17:43.690 "transport_tos": 0, 00:17:43.690 "nvme_error_stat": false, 00:17:43.690 "rdma_srq_size": 0, 00:17:43.690 "io_path_stat": false, 00:17:43.690 "allow_accel_sequence": false, 00:17:43.690 "rdma_max_cq_size": 0, 00:17:43.690 
"rdma_cm_event_timeout_ms": 0, 00:17:43.690 "dhchap_digests": [ 00:17:43.690 "sha256", 00:17:43.690 "sha384", 00:17:43.690 "sha512" 00:17:43.690 ], 00:17:43.690 "dhchap_dhgroups": [ 00:17:43.690 "null", 00:17:43.690 "ffdhe2048", 00:17:43.690 "ffdhe3072", 00:17:43.690 "ffdhe4096", 00:17:43.690 "ffdhe6144", 00:17:43.690 "ffdhe8192" 00:17:43.690 ] 00:17:43.690 } 00:17:43.690 }, 00:17:43.690 { 00:17:43.690 "method": "bdev_nvme_set_hotplug", 00:17:43.690 "params": { 00:17:43.690 "period_us": 100000, 00:17:43.690 "enable": false 00:17:43.690 } 00:17:43.690 }, 00:17:43.690 { 00:17:43.690 "method": "bdev_malloc_create", 00:17:43.690 "params": { 00:17:43.690 "name": "malloc0", 00:17:43.690 "num_blocks": 8192, 00:17:43.690 "block_size": 4096, 00:17:43.690 "physical_block_size": 4096, 00:17:43.690 "uuid": "4632a882-ed2d-4982-95a8-6d3a05e94e70", 00:17:43.690 "optimal_io_boundary": 0, 00:17:43.690 "md_size": 0, 00:17:43.690 "dif_type": 0, 00:17:43.690 "dif_is_head_of_md": false, 00:17:43.690 "dif_pi_format": 0 00:17:43.690 } 00:17:43.690 }, 00:17:43.690 { 00:17:43.690 "method": "bdev_wait_for_examine" 00:17:43.690 } 00:17:43.690 ] 00:17:43.690 }, 00:17:43.690 { 00:17:43.690 "subsystem": "nbd", 00:17:43.690 "config": [] 00:17:43.690 }, 00:17:43.690 { 00:17:43.690 "subsystem": "scheduler", 00:17:43.690 "config": [ 00:17:43.690 { 00:17:43.690 "method": "framework_set_scheduler", 00:17:43.690 "params": { 00:17:43.690 "name": "static" 00:17:43.690 } 00:17:43.690 } 00:17:43.690 ] 00:17:43.690 }, 00:17:43.690 { 00:17:43.690 "subsystem": "nvmf", 00:17:43.690 "config": [ 00:17:43.690 { 00:17:43.690 "method": "nvmf_set_config", 00:17:43.690 "params": { 00:17:43.690 "discovery_filter": "match_any", 00:17:43.690 "admin_cmd_passthru": { 00:17:43.690 "identify_ctrlr": false 00:17:43.690 }, 00:17:43.690 "dhchap_digests": [ 00:17:43.690 "sha256", 00:17:43.690 "sha384", 00:17:43.690 "sha512" 00:17:43.690 ], 00:17:43.690 "dhchap_dhgroups": [ 00:17:43.690 "null", 00:17:43.690 "ffdhe2048", 00:17:43.690 "ffdhe3072", 00:17:43.690 "ffdhe4096", 00:17:43.690 "ffdhe6144", 00:17:43.690 "ffdhe8192" 00:17:43.690 ] 00:17:43.690 } 00:17:43.690 }, 00:17:43.690 { 00:17:43.690 "method": "nvmf_set_max_subsystems", 00:17:43.690 "params": { 00:17:43.690 "max_subsystems": 1024 00:17:43.690 } 00:17:43.690 }, 00:17:43.690 { 00:17:43.690 "method": "nvmf_set_crdt", 00:17:43.690 "params": { 00:17:43.690 "crdt1": 0, 00:17:43.690 "crdt2": 0, 00:17:43.690 "crdt3": 0 00:17:43.690 } 00:17:43.690 }, 00:17:43.690 { 00:17:43.690 "method": "nvmf_create_transport", 00:17:43.690 "params": { 00:17:43.690 "trtype": "TCP", 00:17:43.690 "max_queue_depth": 128, 00:17:43.690 "max_io_qpairs_per_ctrlr": 127, 00:17:43.690 "in_capsule_data_size": 4096, 00:17:43.690 "max_io_size": 131072, 00:17:43.690 "io_unit_size": 131072, 00:17:43.690 "max_aq_depth": 128, 00:17:43.690 "num_shared_buffers": 511, 00:17:43.690 "buf_cache_size": 4294967295, 00:17:43.690 "dif_insert_or_strip": false, 00:17:43.690 "zcopy": false, 00:17:43.690 "c2h_success": false, 00:17:43.690 "sock_priority": 0, 00:17:43.690 "abort_timeout_sec": 1, 00:17:43.690 "ack_timeout": 0, 00:17:43.690 "data_wr_pool_size": 0 00:17:43.690 } 00:17:43.690 }, 00:17:43.690 { 00:17:43.690 "method": "nvmf_create_subsystem", 00:17:43.690 "params": { 00:17:43.690 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.690 "allow_any_host": false, 00:17:43.690 "serial_number": "SPDK00000000000001", 00:17:43.690 "model_number": "SPDK bdev Controller", 00:17:43.690 "max_namespaces": 10, 00:17:43.690 "min_cntlid": 1, 00:17:43.690 
"max_cntlid": 65519, 00:17:43.690 "ana_reporting": false 00:17:43.690 } 00:17:43.690 }, 00:17:43.690 { 00:17:43.690 "method": "nvmf_subsystem_add_host", 00:17:43.690 "params": { 00:17:43.690 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.690 "host": "nqn.2016-06.io.spdk:host1", 00:17:43.690 "psk": "key0" 00:17:43.690 } 00:17:43.690 }, 00:17:43.690 { 00:17:43.690 "method": "nvmf_subsystem_add_ns", 00:17:43.690 "params": { 00:17:43.690 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.690 "namespace": { 00:17:43.690 "nsid": 1, 00:17:43.690 "bdev_name": "malloc0", 00:17:43.690 "nguid": "4632A882ED2D498295A86D3A05E94E70", 00:17:43.690 "uuid": "4632a882-ed2d-4982-95a8-6d3a05e94e70", 00:17:43.690 "no_auto_visible": false 00:17:43.690 } 00:17:43.690 } 00:17:43.690 }, 00:17:43.690 { 00:17:43.690 "method": "nvmf_subsystem_add_listener", 00:17:43.690 "params": { 00:17:43.690 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.690 "listen_address": { 00:17:43.690 "trtype": "TCP", 00:17:43.690 "adrfam": "IPv4", 00:17:43.690 "traddr": "10.0.0.2", 00:17:43.690 "trsvcid": "4420" 00:17:43.690 }, 00:17:43.690 "secure_channel": true 00:17:43.690 } 00:17:43.690 } 00:17:43.690 ] 00:17:43.690 } 00:17:43.690 ] 00:17:43.690 }' 00:17:43.690 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:43.950 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:17:43.950 "subsystems": [ 00:17:43.950 { 00:17:43.950 "subsystem": "keyring", 00:17:43.950 "config": [ 00:17:43.950 { 00:17:43.950 "method": "keyring_file_add_key", 00:17:43.950 "params": { 00:17:43.950 "name": "key0", 00:17:43.950 "path": "/tmp/tmp.gMWyOCJQJN" 00:17:43.950 } 00:17:43.950 } 00:17:43.950 ] 00:17:43.950 }, 00:17:43.950 { 00:17:43.950 "subsystem": "iobuf", 00:17:43.950 "config": [ 00:17:43.950 { 00:17:43.950 "method": "iobuf_set_options", 00:17:43.950 "params": { 00:17:43.950 "small_pool_count": 8192, 00:17:43.950 "large_pool_count": 1024, 00:17:43.950 "small_bufsize": 8192, 00:17:43.950 "large_bufsize": 135168, 00:17:43.950 "enable_numa": false 00:17:43.950 } 00:17:43.950 } 00:17:43.950 ] 00:17:43.950 }, 00:17:43.950 { 00:17:43.950 "subsystem": "sock", 00:17:43.950 "config": [ 00:17:43.950 { 00:17:43.950 "method": "sock_set_default_impl", 00:17:43.950 "params": { 00:17:43.950 "impl_name": "posix" 00:17:43.950 } 00:17:43.950 }, 00:17:43.950 { 00:17:43.950 "method": "sock_impl_set_options", 00:17:43.950 "params": { 00:17:43.950 "impl_name": "ssl", 00:17:43.950 "recv_buf_size": 4096, 00:17:43.950 "send_buf_size": 4096, 00:17:43.950 "enable_recv_pipe": true, 00:17:43.950 "enable_quickack": false, 00:17:43.950 "enable_placement_id": 0, 00:17:43.950 "enable_zerocopy_send_server": true, 00:17:43.950 "enable_zerocopy_send_client": false, 00:17:43.950 "zerocopy_threshold": 0, 00:17:43.950 "tls_version": 0, 00:17:43.950 "enable_ktls": false 00:17:43.950 } 00:17:43.950 }, 00:17:43.950 { 00:17:43.950 "method": "sock_impl_set_options", 00:17:43.950 "params": { 00:17:43.950 "impl_name": "posix", 00:17:43.950 "recv_buf_size": 2097152, 00:17:43.950 "send_buf_size": 2097152, 00:17:43.950 "enable_recv_pipe": true, 00:17:43.950 "enable_quickack": false, 00:17:43.950 "enable_placement_id": 0, 00:17:43.950 "enable_zerocopy_send_server": true, 00:17:43.950 "enable_zerocopy_send_client": false, 00:17:43.950 "zerocopy_threshold": 0, 00:17:43.950 "tls_version": 0, 00:17:43.950 "enable_ktls": false 00:17:43.950 } 00:17:43.950 
} 00:17:43.950 ] 00:17:43.950 }, 00:17:43.950 { 00:17:43.950 "subsystem": "vmd", 00:17:43.950 "config": [] 00:17:43.950 }, 00:17:43.950 { 00:17:43.950 "subsystem": "accel", 00:17:43.950 "config": [ 00:17:43.950 { 00:17:43.950 "method": "accel_set_options", 00:17:43.950 "params": { 00:17:43.950 "small_cache_size": 128, 00:17:43.950 "large_cache_size": 16, 00:17:43.950 "task_count": 2048, 00:17:43.950 "sequence_count": 2048, 00:17:43.951 "buf_count": 2048 00:17:43.951 } 00:17:43.951 } 00:17:43.951 ] 00:17:43.951 }, 00:17:43.951 { 00:17:43.951 "subsystem": "bdev", 00:17:43.951 "config": [ 00:17:43.951 { 00:17:43.951 "method": "bdev_set_options", 00:17:43.951 "params": { 00:17:43.951 "bdev_io_pool_size": 65535, 00:17:43.951 "bdev_io_cache_size": 256, 00:17:43.951 "bdev_auto_examine": true, 00:17:43.951 "iobuf_small_cache_size": 128, 00:17:43.951 "iobuf_large_cache_size": 16 00:17:43.951 } 00:17:43.951 }, 00:17:43.951 { 00:17:43.951 "method": "bdev_raid_set_options", 00:17:43.951 "params": { 00:17:43.951 "process_window_size_kb": 1024, 00:17:43.951 "process_max_bandwidth_mb_sec": 0 00:17:43.951 } 00:17:43.951 }, 00:17:43.951 { 00:17:43.951 "method": "bdev_iscsi_set_options", 00:17:43.951 "params": { 00:17:43.951 "timeout_sec": 30 00:17:43.951 } 00:17:43.951 }, 00:17:43.951 { 00:17:43.951 "method": "bdev_nvme_set_options", 00:17:43.951 "params": { 00:17:43.951 "action_on_timeout": "none", 00:17:43.951 "timeout_us": 0, 00:17:43.951 "timeout_admin_us": 0, 00:17:43.951 "keep_alive_timeout_ms": 10000, 00:17:43.951 "arbitration_burst": 0, 00:17:43.951 "low_priority_weight": 0, 00:17:43.951 "medium_priority_weight": 0, 00:17:43.951 "high_priority_weight": 0, 00:17:43.951 "nvme_adminq_poll_period_us": 10000, 00:17:43.951 "nvme_ioq_poll_period_us": 0, 00:17:43.951 "io_queue_requests": 512, 00:17:43.951 "delay_cmd_submit": true, 00:17:43.951 "transport_retry_count": 4, 00:17:43.951 "bdev_retry_count": 3, 00:17:43.951 "transport_ack_timeout": 0, 00:17:43.951 "ctrlr_loss_timeout_sec": 0, 00:17:43.951 "reconnect_delay_sec": 0, 00:17:43.951 "fast_io_fail_timeout_sec": 0, 00:17:43.951 "disable_auto_failback": false, 00:17:43.951 "generate_uuids": false, 00:17:43.951 "transport_tos": 0, 00:17:43.951 "nvme_error_stat": false, 00:17:43.951 "rdma_srq_size": 0, 00:17:43.951 "io_path_stat": false, 00:17:43.951 "allow_accel_sequence": false, 00:17:43.951 "rdma_max_cq_size": 0, 00:17:43.951 "rdma_cm_event_timeout_ms": 0, 00:17:43.951 "dhchap_digests": [ 00:17:43.951 "sha256", 00:17:43.951 "sha384", 00:17:43.951 "sha512" 00:17:43.951 ], 00:17:43.951 "dhchap_dhgroups": [ 00:17:43.951 "null", 00:17:43.951 "ffdhe2048", 00:17:43.951 "ffdhe3072", 00:17:43.951 "ffdhe4096", 00:17:43.951 "ffdhe6144", 00:17:43.951 "ffdhe8192" 00:17:43.951 ] 00:17:43.951 } 00:17:43.951 }, 00:17:43.951 { 00:17:43.951 "method": "bdev_nvme_attach_controller", 00:17:43.951 "params": { 00:17:43.951 "name": "TLSTEST", 00:17:43.951 "trtype": "TCP", 00:17:43.951 "adrfam": "IPv4", 00:17:43.951 "traddr": "10.0.0.2", 00:17:43.951 "trsvcid": "4420", 00:17:43.951 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.951 "prchk_reftag": false, 00:17:43.951 "prchk_guard": false, 00:17:43.951 "ctrlr_loss_timeout_sec": 0, 00:17:43.951 "reconnect_delay_sec": 0, 00:17:43.951 "fast_io_fail_timeout_sec": 0, 00:17:43.951 "psk": "key0", 00:17:43.951 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.951 "hdgst": false, 00:17:43.951 "ddgst": false, 00:17:43.951 "multipath": "multipath" 00:17:43.951 } 00:17:43.951 }, 00:17:43.951 { 00:17:43.951 "method": 
"bdev_nvme_set_hotplug", 00:17:43.951 "params": { 00:17:43.951 "period_us": 100000, 00:17:43.951 "enable": false 00:17:43.951 } 00:17:43.951 }, 00:17:43.951 { 00:17:43.951 "method": "bdev_wait_for_examine" 00:17:43.951 } 00:17:43.951 ] 00:17:43.951 }, 00:17:43.951 { 00:17:43.951 "subsystem": "nbd", 00:17:43.951 "config": [] 00:17:43.951 } 00:17:43.951 ] 00:17:43.951 }' 00:17:43.951 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3600428 00:17:43.951 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3600428 ']' 00:17:43.951 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3600428 00:17:43.951 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:43.951 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:43.951 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3600428 00:17:43.951 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:43.951 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:43.951 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3600428' 00:17:43.951 killing process with pid 3600428 00:17:43.951 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3600428 00:17:43.951 Received shutdown signal, test time was about 10.000000 seconds 00:17:43.951 00:17:43.951 Latency(us) 00:17:43.951 [2024-12-09T04:12:20.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.951 [2024-12-09T04:12:20.597Z] =================================================================================================================== 00:17:43.951 [2024-12-09T04:12:20.597Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:43.951 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3600428 00:17:44.210 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3600119 00:17:44.210 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3600119 ']' 00:17:44.210 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3600119 00:17:44.210 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:44.210 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:44.210 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3600119 00:17:44.210 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:44.210 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:44.210 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3600119' 00:17:44.210 killing process with pid 3600119 00:17:44.210 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3600119 00:17:44.210 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3600119 00:17:44.468 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:44.468 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:44.468 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:44.468 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:17:44.468 "subsystems": [ 00:17:44.468 { 00:17:44.468 "subsystem": "keyring", 00:17:44.468 "config": [ 00:17:44.468 { 00:17:44.468 "method": "keyring_file_add_key", 00:17:44.468 "params": { 00:17:44.468 "name": "key0", 00:17:44.468 "path": "/tmp/tmp.gMWyOCJQJN" 00:17:44.468 } 00:17:44.468 } 00:17:44.468 ] 00:17:44.468 }, 00:17:44.468 { 00:17:44.468 "subsystem": "iobuf", 00:17:44.468 "config": [ 00:17:44.468 { 00:17:44.468 "method": "iobuf_set_options", 00:17:44.468 "params": { 00:17:44.468 "small_pool_count": 8192, 00:17:44.468 "large_pool_count": 1024, 00:17:44.468 "small_bufsize": 8192, 00:17:44.468 "large_bufsize": 135168, 00:17:44.468 "enable_numa": false 00:17:44.468 } 00:17:44.468 } 00:17:44.468 ] 00:17:44.468 }, 00:17:44.468 { 00:17:44.468 "subsystem": "sock", 00:17:44.468 "config": [ 00:17:44.468 { 00:17:44.468 "method": "sock_set_default_impl", 00:17:44.468 "params": { 00:17:44.468 "impl_name": "posix" 00:17:44.468 } 00:17:44.468 }, 00:17:44.468 { 00:17:44.468 "method": "sock_impl_set_options", 00:17:44.468 "params": { 00:17:44.468 "impl_name": "ssl", 00:17:44.468 "recv_buf_size": 4096, 00:17:44.468 "send_buf_size": 4096, 00:17:44.468 "enable_recv_pipe": true, 00:17:44.468 "enable_quickack": false, 00:17:44.468 "enable_placement_id": 0, 00:17:44.468 "enable_zerocopy_send_server": true, 00:17:44.468 "enable_zerocopy_send_client": false, 00:17:44.468 "zerocopy_threshold": 0, 00:17:44.468 "tls_version": 0, 00:17:44.468 "enable_ktls": false 00:17:44.468 } 00:17:44.468 }, 00:17:44.468 { 00:17:44.468 "method": "sock_impl_set_options", 00:17:44.468 "params": { 00:17:44.468 "impl_name": "posix", 00:17:44.468 "recv_buf_size": 2097152, 00:17:44.468 "send_buf_size": 2097152, 00:17:44.468 "enable_recv_pipe": true, 00:17:44.468 "enable_quickack": false, 00:17:44.468 "enable_placement_id": 0, 00:17:44.468 "enable_zerocopy_send_server": true, 00:17:44.468 "enable_zerocopy_send_client": false, 00:17:44.468 "zerocopy_threshold": 0, 00:17:44.468 "tls_version": 0, 00:17:44.468 "enable_ktls": false 00:17:44.468 } 00:17:44.468 } 00:17:44.468 ] 00:17:44.469 }, 00:17:44.469 { 00:17:44.469 "subsystem": "vmd", 00:17:44.469 "config": [] 00:17:44.469 }, 00:17:44.469 { 00:17:44.469 "subsystem": "accel", 00:17:44.469 "config": [ 00:17:44.469 { 00:17:44.469 "method": "accel_set_options", 00:17:44.469 "params": { 00:17:44.469 "small_cache_size": 128, 00:17:44.469 "large_cache_size": 16, 00:17:44.469 "task_count": 2048, 00:17:44.469 "sequence_count": 2048, 00:17:44.469 "buf_count": 2048 00:17:44.469 } 00:17:44.469 } 00:17:44.469 ] 00:17:44.469 }, 00:17:44.469 { 00:17:44.469 "subsystem": "bdev", 00:17:44.469 "config": [ 00:17:44.469 { 00:17:44.469 "method": "bdev_set_options", 00:17:44.469 "params": { 00:17:44.469 "bdev_io_pool_size": 65535, 00:17:44.469 "bdev_io_cache_size": 256, 00:17:44.469 "bdev_auto_examine": true, 00:17:44.469 "iobuf_small_cache_size": 128, 00:17:44.469 "iobuf_large_cache_size": 16 00:17:44.469 } 00:17:44.469 }, 00:17:44.469 { 00:17:44.469 "method": "bdev_raid_set_options", 00:17:44.469 "params": { 00:17:44.469 "process_window_size_kb": 1024, 00:17:44.469 "process_max_bandwidth_mb_sec": 0 00:17:44.469 } 00:17:44.469 }, 
00:17:44.469 { 00:17:44.469 "method": "bdev_iscsi_set_options", 00:17:44.469 "params": { 00:17:44.469 "timeout_sec": 30 00:17:44.469 } 00:17:44.469 }, 00:17:44.469 { 00:17:44.469 "method": "bdev_nvme_set_options", 00:17:44.469 "params": { 00:17:44.469 "action_on_timeout": "none", 00:17:44.469 "timeout_us": 0, 00:17:44.469 "timeout_admin_us": 0, 00:17:44.469 "keep_alive_timeout_ms": 10000, 00:17:44.469 "arbitration_burst": 0, 00:17:44.469 "low_priority_weight": 0, 00:17:44.469 "medium_priority_weight": 0, 00:17:44.469 "high_priority_weight": 0, 00:17:44.469 "nvme_adminq_poll_period_us": 10000, 00:17:44.469 "nvme_ioq_poll_period_us": 0, 00:17:44.469 "io_queue_requests": 0, 00:17:44.469 "delay_cmd_submit": true, 00:17:44.469 "transport_retry_count": 4, 00:17:44.469 "bdev_retry_count": 3, 00:17:44.469 "transport_ack_timeout": 0, 00:17:44.469 "ctrlr_loss_timeout_sec": 0, 00:17:44.469 "reconnect_delay_sec": 0, 00:17:44.469 "fast_io_fail_timeout_sec": 0, 00:17:44.469 "disable_auto_failback": false, 00:17:44.469 "generate_uuids": false, 00:17:44.469 "transport_tos": 0, 00:17:44.469 "nvme_error_stat": false, 00:17:44.469 "rdma_srq_size": 0, 00:17:44.469 "io_path_stat": false, 00:17:44.469 "allow_accel_sequence": false, 00:17:44.469 "rdma_max_cq_size": 0, 00:17:44.469 "rdma_cm_event_timeout_ms": 0, 00:17:44.469 "dhchap_digests": [ 00:17:44.469 "sha256", 00:17:44.469 "sha384", 00:17:44.469 "sha512" 00:17:44.469 ], 00:17:44.469 "dhchap_dhgroups": [ 00:17:44.469 "null", 00:17:44.469 "ffdhe2048", 00:17:44.469 "ffdhe3072", 00:17:44.469 "ffdhe4096", 00:17:44.469 "ffdhe6144", 00:17:44.469 "ffdhe8192" 00:17:44.469 ] 00:17:44.469 } 00:17:44.469 }, 00:17:44.469 { 00:17:44.469 "method": "bdev_nvme_set_hotplug", 00:17:44.469 "params": { 00:17:44.469 "period_us": 100000, 00:17:44.469 "enable": false 00:17:44.469 } 00:17:44.469 }, 00:17:44.469 { 00:17:44.469 "method": "bdev_malloc_create", 00:17:44.469 "params": { 00:17:44.469 "name": "malloc0", 00:17:44.469 "num_blocks": 8192, 00:17:44.469 "block_size": 4096, 00:17:44.469 "physical_block_size": 4096, 00:17:44.469 "uuid": "4632a882-ed2d-4982-95a8-6d3a05e94e70", 00:17:44.469 "optimal_io_boundary": 0, 00:17:44.469 "md_size": 0, 00:17:44.469 "dif_type": 0, 00:17:44.469 "dif_is_head_of_md": false, 00:17:44.469 "dif_pi_format": 0 00:17:44.469 } 00:17:44.469 }, 00:17:44.469 { 00:17:44.469 "method": "bdev_wait_for_examine" 00:17:44.469 } 00:17:44.469 ] 00:17:44.469 }, 00:17:44.469 { 00:17:44.469 "subsystem": "nbd", 00:17:44.469 "config": [] 00:17:44.469 }, 00:17:44.469 { 00:17:44.469 "subsystem": "scheduler", 00:17:44.469 "config": [ 00:17:44.469 { 00:17:44.469 "method": "framework_set_scheduler", 00:17:44.469 "params": { 00:17:44.469 "name": "static" 00:17:44.469 } 00:17:44.469 } 00:17:44.469 ] 00:17:44.469 }, 00:17:44.469 { 00:17:44.469 "subsystem": "nvmf", 00:17:44.469 "config": [ 00:17:44.469 { 00:17:44.469 "method": "nvmf_set_config", 00:17:44.469 "params": { 00:17:44.469 "discovery_filter": "match_any", 00:17:44.469 "admin_cmd_passthru": { 00:17:44.469 "identify_ctrlr": false 00:17:44.469 }, 00:17:44.469 "dhchap_digests": [ 00:17:44.469 "sha256", 00:17:44.469 "sha384", 00:17:44.469 "sha512" 00:17:44.469 ], 00:17:44.469 "dhchap_dhgroups": [ 00:17:44.469 "null", 00:17:44.469 "ffdhe2048", 00:17:44.469 "ffdhe3072", 00:17:44.469 "ffdhe4096", 00:17:44.469 "ffdhe6144", 00:17:44.469 "ffdhe8192" 00:17:44.469 ] 00:17:44.469 } 00:17:44.469 }, 00:17:44.469 { 00:17:44.469 "method": "nvmf_set_max_subsystems", 00:17:44.469 "params": { 00:17:44.469 "max_subsystems": 1024 
00:17:44.469 } 00:17:44.469 }, 00:17:44.469 { 00:17:44.469 "method": "nvmf_set_crdt", 00:17:44.469 "params": { 00:17:44.469 "crdt1": 0, 00:17:44.469 "crdt2": 0, 00:17:44.469 "crdt3": 0 00:17:44.469 } 00:17:44.469 }, 00:17:44.469 { 00:17:44.469 "method": "nvmf_create_transport", 00:17:44.469 "params": { 00:17:44.469 "trtype": "TCP", 00:17:44.469 "max_queue_depth": 128, 00:17:44.469 "max_io_qpairs_per_ctrlr": 127, 00:17:44.469 "in_capsule_data_size": 4096, 00:17:44.469 "max_io_size": 131072, 00:17:44.469 "io_unit_size": 131072, 00:17:44.469 "max_aq_depth": 128, 00:17:44.469 "num_shared_buffers": 511, 00:17:44.469 "buf_cache_size": 4294967295, 00:17:44.469 "dif_insert_or_strip": false, 00:17:44.469 "zcopy": false, 00:17:44.469 "c2h_success": false, 00:17:44.469 "sock_priority": 0, 00:17:44.469 "abort_timeout_sec": 1, 00:17:44.469 "ack_timeout": 0, 00:17:44.469 "data_wr_pool_size": 0 00:17:44.469 } 00:17:44.469 }, 00:17:44.469 { 00:17:44.469 "method": "nvmf_create_subsystem", 00:17:44.469 "params": { 00:17:44.469 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:44.469 "allow_any_host": false, 00:17:44.469 "serial_number": "SPDK00000000000001", 00:17:44.469 "model_number": "SPDK bdev Controller", 00:17:44.469 "max_namespaces": 10, 00:17:44.469 "min_cntlid": 1, 00:17:44.469 "max_cntlid": 65519, 00:17:44.469 "ana_reporting": false 00:17:44.469 } 00:17:44.469 }, 00:17:44.469 { 00:17:44.469 "method": "nvmf_subsystem_add_host", 00:17:44.469 "params": { 00:17:44.469 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:44.469 "host": "nqn.2016-06.io.spdk:host1", 00:17:44.469 "psk": "key0" 00:17:44.469 } 00:17:44.469 }, 00:17:44.469 { 00:17:44.469 "method": "nvmf_subsystem_add_ns", 00:17:44.469 "params": { 00:17:44.469 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:44.469 "namespace": { 00:17:44.469 "nsid": 1, 00:17:44.469 "bdev_name": "malloc0", 00:17:44.469 "nguid": "4632A882ED2D498295A86D3A05E94E70", 00:17:44.469 "uuid": "4632a882-ed2d-4982-95a8-6d3a05e94e70", 00:17:44.469 "no_auto_visible": false 00:17:44.469 } 00:17:44.469 } 00:17:44.469 }, 00:17:44.469 { 00:17:44.469 "method": "nvmf_subsystem_add_listener", 00:17:44.469 "params": { 00:17:44.469 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:44.469 "listen_address": { 00:17:44.469 "trtype": "TCP", 00:17:44.469 "adrfam": "IPv4", 00:17:44.469 "traddr": "10.0.0.2", 00:17:44.469 "trsvcid": "4420" 00:17:44.469 }, 00:17:44.469 "secure_channel": true 00:17:44.469 } 00:17:44.469 } 00:17:44.469 ] 00:17:44.469 } 00:17:44.469 ] 00:17:44.469 }' 00:17:44.470 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:44.470 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3600850 00:17:44.470 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:44.470 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3600850 00:17:44.470 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3600850 ']' 00:17:44.470 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.470 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:44.470 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:17:44.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.470 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:44.470 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:44.470 [2024-12-09 05:12:21.030160] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:17:44.470 [2024-12-09 05:12:21.030209] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.470 [2024-12-09 05:12:21.098151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.728 [2024-12-09 05:12:21.137685] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.728 [2024-12-09 05:12:21.137716] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:44.728 [2024-12-09 05:12:21.137723] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:44.728 [2024-12-09 05:12:21.137730] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:44.728 [2024-12-09 05:12:21.137735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:44.728 [2024-12-09 05:12:21.138348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.728 [2024-12-09 05:12:21.351022] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.985 [2024-12-09 05:12:21.383040] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:44.985 [2024-12-09 05:12:21.383269] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.245 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:45.245 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:45.245 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:45.245 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:45.245 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:45.245 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:45.245 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3600880 00:17:45.245 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3600880 /var/tmp/bdevperf.sock 00:17:45.245 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3600880 ']' 00:17:45.245 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:45.245 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:45.245 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:45.245 05:12:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:45.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:45.245 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:17:45.245 "subsystems": [ 00:17:45.245 { 00:17:45.245 "subsystem": "keyring", 00:17:45.245 "config": [ 00:17:45.245 { 00:17:45.245 "method": "keyring_file_add_key", 00:17:45.245 "params": { 00:17:45.245 "name": "key0", 00:17:45.245 "path": "/tmp/tmp.gMWyOCJQJN" 00:17:45.245 } 00:17:45.245 } 00:17:45.245 ] 00:17:45.245 }, 00:17:45.245 { 00:17:45.245 "subsystem": "iobuf", 00:17:45.245 "config": [ 00:17:45.245 { 00:17:45.245 "method": "iobuf_set_options", 00:17:45.245 "params": { 00:17:45.245 "small_pool_count": 8192, 00:17:45.245 "large_pool_count": 1024, 00:17:45.245 "small_bufsize": 8192, 00:17:45.245 "large_bufsize": 135168, 00:17:45.245 "enable_numa": false 00:17:45.245 } 00:17:45.245 } 00:17:45.245 ] 00:17:45.245 }, 00:17:45.245 { 00:17:45.245 "subsystem": "sock", 00:17:45.245 "config": [ 00:17:45.245 { 00:17:45.245 "method": "sock_set_default_impl", 00:17:45.245 "params": { 00:17:45.245 "impl_name": "posix" 00:17:45.245 } 00:17:45.245 }, 00:17:45.245 { 00:17:45.245 "method": "sock_impl_set_options", 00:17:45.245 "params": { 00:17:45.245 "impl_name": "ssl", 00:17:45.245 "recv_buf_size": 4096, 00:17:45.245 "send_buf_size": 4096, 00:17:45.245 "enable_recv_pipe": true, 00:17:45.245 "enable_quickack": false, 00:17:45.245 "enable_placement_id": 0, 00:17:45.245 "enable_zerocopy_send_server": true, 00:17:45.245 "enable_zerocopy_send_client": false, 00:17:45.245 "zerocopy_threshold": 0, 00:17:45.245 "tls_version": 0, 00:17:45.245 "enable_ktls": false 00:17:45.245 } 00:17:45.245 }, 00:17:45.245 { 00:17:45.245 "method": "sock_impl_set_options", 00:17:45.245 "params": { 00:17:45.245 "impl_name": "posix", 00:17:45.245 "recv_buf_size": 2097152, 00:17:45.245 "send_buf_size": 2097152, 00:17:45.245 "enable_recv_pipe": true, 00:17:45.245 "enable_quickack": false, 00:17:45.245 "enable_placement_id": 0, 00:17:45.245 "enable_zerocopy_send_server": true, 00:17:45.245 "enable_zerocopy_send_client": false, 00:17:45.245 "zerocopy_threshold": 0, 00:17:45.245 "tls_version": 0, 00:17:45.245 "enable_ktls": false 00:17:45.245 } 00:17:45.245 } 00:17:45.245 ] 00:17:45.245 }, 00:17:45.245 { 00:17:45.245 "subsystem": "vmd", 00:17:45.245 "config": [] 00:17:45.245 }, 00:17:45.245 { 00:17:45.245 "subsystem": "accel", 00:17:45.245 "config": [ 00:17:45.245 { 00:17:45.245 "method": "accel_set_options", 00:17:45.245 "params": { 00:17:45.245 "small_cache_size": 128, 00:17:45.245 "large_cache_size": 16, 00:17:45.245 "task_count": 2048, 00:17:45.245 "sequence_count": 2048, 00:17:45.245 "buf_count": 2048 00:17:45.245 } 00:17:45.245 } 00:17:45.245 ] 00:17:45.245 }, 00:17:45.245 { 00:17:45.245 "subsystem": "bdev", 00:17:45.245 "config": [ 00:17:45.245 { 00:17:45.245 "method": "bdev_set_options", 00:17:45.245 "params": { 00:17:45.245 "bdev_io_pool_size": 65535, 00:17:45.245 "bdev_io_cache_size": 256, 00:17:45.245 "bdev_auto_examine": true, 00:17:45.245 "iobuf_small_cache_size": 128, 00:17:45.245 "iobuf_large_cache_size": 16 00:17:45.245 } 00:17:45.245 }, 00:17:45.245 { 00:17:45.245 "method": "bdev_raid_set_options", 00:17:45.245 "params": { 00:17:45.245 "process_window_size_kb": 1024, 00:17:45.246 "process_max_bandwidth_mb_sec": 0 00:17:45.246 } 00:17:45.246 }, 
00:17:45.246 { 00:17:45.246 "method": "bdev_iscsi_set_options", 00:17:45.246 "params": { 00:17:45.246 "timeout_sec": 30 00:17:45.246 } 00:17:45.246 }, 00:17:45.246 { 00:17:45.246 "method": "bdev_nvme_set_options", 00:17:45.246 "params": { 00:17:45.246 "action_on_timeout": "none", 00:17:45.246 "timeout_us": 0, 00:17:45.246 "timeout_admin_us": 0, 00:17:45.246 "keep_alive_timeout_ms": 10000, 00:17:45.246 "arbitration_burst": 0, 00:17:45.246 "low_priority_weight": 0, 00:17:45.246 "medium_priority_weight": 0, 00:17:45.246 "high_priority_weight": 0, 00:17:45.246 "nvme_adminq_poll_period_us": 10000, 00:17:45.246 "nvme_ioq_poll_period_us": 0, 00:17:45.246 "io_queue_requests": 512, 00:17:45.246 "delay_cmd_submit": true, 00:17:45.246 "transport_retry_count": 4, 00:17:45.246 "bdev_retry_count": 3, 00:17:45.246 "transport_ack_timeout": 0, 00:17:45.246 "ctrlr_loss_timeout_sec": 0, 00:17:45.246 "reconnect_delay_sec": 0, 00:17:45.246 "fast_io_fail_timeout_sec": 0, 00:17:45.246 "disable_auto_failback": false, 00:17:45.246 "generate_uuids": false, 00:17:45.246 "transport_tos": 0, 00:17:45.246 "nvme_error_stat": false, 00:17:45.246 "rdma_srq_size": 0, 00:17:45.246 "io_path_stat": false, 00:17:45.246 "allow_accel_sequence": false, 00:17:45.246 "rdma_max_cq_size": 0, 00:17:45.246 "rdma_cm_event_timeout_ms": 0, 00:17:45.246 "dhchap_digests": [ 00:17:45.246 "sha256", 00:17:45.246 "sha384", 00:17:45.246 "sha512" 00:17:45.246 ], 00:17:45.246 "dhchap_dhgroups": [ 00:17:45.246 "null", 00:17:45.246 "ffdhe2048", 00:17:45.246 "ffdhe3072", 00:17:45.246 "ffdhe4096", 00:17:45.246 "ffdhe6144", 00:17:45.246 "ffdhe8192" 00:17:45.246 ] 00:17:45.246 } 00:17:45.246 }, 00:17:45.246 { 00:17:45.246 "method": "bdev_nvme_attach_controller", 00:17:45.246 "params": { 00:17:45.246 "name": "TLSTEST", 00:17:45.246 "trtype": "TCP", 00:17:45.246 "adrfam": "IPv4", 00:17:45.246 "traddr": "10.0.0.2", 00:17:45.246 "trsvcid": "4420", 00:17:45.246 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.246 "prchk_reftag": false, 00:17:45.246 "prchk_guard": false, 00:17:45.246 "ctrlr_loss_timeout_sec": 0, 00:17:45.246 "reconnect_delay_sec": 0, 00:17:45.246 "fast_io_fail_timeout_sec": 0, 00:17:45.246 "psk": "key0", 00:17:45.246 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:45.246 "hdgst": false, 00:17:45.246 "ddgst": false, 00:17:45.246 "multipath": "multipath" 00:17:45.246 } 00:17:45.246 }, 00:17:45.246 { 00:17:45.246 "method": "bdev_nvme_set_hotplug", 00:17:45.246 "params": { 00:17:45.246 "period_us": 100000, 00:17:45.246 "enable": false 00:17:45.246 } 00:17:45.246 }, 00:17:45.246 { 00:17:45.246 "method": "bdev_wait_for_examine" 00:17:45.246 } 00:17:45.246 ] 00:17:45.246 }, 00:17:45.246 { 00:17:45.246 "subsystem": "nbd", 00:17:45.246 "config": [] 00:17:45.246 } 00:17:45.246 ] 00:17:45.246 }' 00:17:45.246 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:45.246 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:45.505 [2024-12-09 05:12:21.926206] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:17:45.505 [2024-12-09 05:12:21.926255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3600880 ] 00:17:45.505 [2024-12-09 05:12:21.987949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.505 [2024-12-09 05:12:22.028752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:45.780 [2024-12-09 05:12:22.183379] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:46.346 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:46.346 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:46.347 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:46.347 Running I/O for 10 seconds... 00:17:48.213 5236.00 IOPS, 20.45 MiB/s [2024-12-09T04:12:26.235Z] 4753.00 IOPS, 18.57 MiB/s [2024-12-09T04:12:27.169Z] 4322.33 IOPS, 16.88 MiB/s [2024-12-09T04:12:28.104Z] 4113.75 IOPS, 16.07 MiB/s [2024-12-09T04:12:29.039Z] 3957.40 IOPS, 15.46 MiB/s [2024-12-09T04:12:29.989Z] 3867.50 IOPS, 15.11 MiB/s [2024-12-09T04:12:30.922Z] 3808.86 IOPS, 14.88 MiB/s [2024-12-09T04:12:32.296Z] 3765.62 IOPS, 14.71 MiB/s [2024-12-09T04:12:33.233Z] 3730.11 IOPS, 14.57 MiB/s [2024-12-09T04:12:33.233Z] 3700.70 IOPS, 14.46 MiB/s 00:17:56.587 Latency(us) 00:17:56.587 [2024-12-09T04:12:33.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.587 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:56.587 Verification LBA range: start 0x0 length 0x2000 00:17:56.587 TLSTESTn1 : 10.04 3699.96 14.45 0.00 0.00 34523.38 4986.43 48553.63 00:17:56.587 [2024-12-09T04:12:33.233Z] =================================================================================================================== 00:17:56.587 [2024-12-09T04:12:33.233Z] Total : 3699.96 14.45 0.00 0.00 34523.38 4986.43 48553.63 00:17:56.587 { 00:17:56.587 "results": [ 00:17:56.587 { 00:17:56.587 "job": "TLSTESTn1", 00:17:56.587 "core_mask": "0x4", 00:17:56.587 "workload": "verify", 00:17:56.587 "status": "finished", 00:17:56.587 "verify_range": { 00:17:56.587 "start": 0, 00:17:56.587 "length": 8192 00:17:56.587 }, 00:17:56.587 "queue_depth": 128, 00:17:56.587 "io_size": 4096, 00:17:56.587 "runtime": 10.036596, 00:17:56.587 "iops": 3699.9596277462997, 00:17:56.587 "mibps": 14.452967295883983, 00:17:56.587 "io_failed": 0, 00:17:56.587 "io_timeout": 0, 00:17:56.587 "avg_latency_us": 34523.37927200988, 00:17:56.587 "min_latency_us": 4986.434782608696, 00:17:56.587 "max_latency_us": 48553.62782608696 00:17:56.587 } 00:17:56.587 ], 00:17:56.587 "core_count": 1 00:17:56.587 } 00:17:56.587 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:56.587 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3600880 00:17:56.587 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3600880 ']' 00:17:56.587 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3600880 00:17:56.587 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:17:56.587 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.587 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3600880 00:17:56.587 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:56.587 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:56.587 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3600880' 00:17:56.587 killing process with pid 3600880 00:17:56.587 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3600880 00:17:56.587 Received shutdown signal, test time was about 10.000000 seconds 00:17:56.587 00:17:56.587 Latency(us) 00:17:56.587 [2024-12-09T04:12:33.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.587 [2024-12-09T04:12:33.233Z] =================================================================================================================== 00:17:56.587 [2024-12-09T04:12:33.233Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:56.587 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3600880 00:17:56.588 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3600850 00:17:56.588 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3600850 ']' 00:17:56.588 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3600850 00:17:56.588 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:56.588 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.588 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3600850 00:17:56.588 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:56.588 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:56.588 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3600850' 00:17:56.588 killing process with pid 3600850 00:17:56.588 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3600850 00:17:56.588 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3600850 00:17:56.847 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:17:56.847 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:56.847 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:56.847 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.847 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3602797 00:17:56.847 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3602797 00:17:56.847 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
00:17:56.847 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3602797 ']' 00:17:56.847 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.847 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:56.847 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.847 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:56.847 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.847 [2024-12-09 05:12:33.482025] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:17:56.847 [2024-12-09 05:12:33.482074] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.106 [2024-12-09 05:12:33.551968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.106 [2024-12-09 05:12:33.594527] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.106 [2024-12-09 05:12:33.594562] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.106 [2024-12-09 05:12:33.594570] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:57.106 [2024-12-09 05:12:33.594576] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:57.106 [2024-12-09 05:12:33.594582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:57.106 [2024-12-09 05:12:33.595153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.106 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:57.106 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:57.106 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:57.106 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:57.106 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.106 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.106 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.gMWyOCJQJN 00:17:57.106 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gMWyOCJQJN 00:17:57.106 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:57.364 [2024-12-09 05:12:33.896302] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.364 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:57.623 05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:57.882 [2024-12-09 05:12:34.269272] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:57.882 [2024-12-09 05:12:34.269488] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:57.882 05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:57.882 malloc0 00:17:57.882 05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:58.140 05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gMWyOCJQJN 00:17:58.399 05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:58.399 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3603179 00:17:58.399 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:58.399 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:58.399 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3603179 /var/tmp/bdevperf.sock 00:17:58.399 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3603179 ']' 00:17:58.399 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:58.399 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.664 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:58.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:58.664 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.664 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.664 [2024-12-09 05:12:35.085561] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:17:58.664 [2024-12-09 05:12:35.085614] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3603179 ] 00:17:58.664 [2024-12-09 05:12:35.150527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.664 [2024-12-09 05:12:35.191208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.664 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.664 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:58.664 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gMWyOCJQJN 00:17:58.922 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:59.181 [2024-12-09 05:12:35.623832] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:59.181 nvme0n1 00:17:59.181 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:59.181 Running I/O for 1 seconds... 
00:18:00.560 5329.00 IOPS, 20.82 MiB/s 00:18:00.560 Latency(us) 00:18:00.560 [2024-12-09T04:12:37.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.560 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:00.560 Verification LBA range: start 0x0 length 0x2000 00:18:00.560 nvme0n1 : 1.03 5274.95 20.61 0.00 0.00 23908.34 5100.41 34648.60 00:18:00.560 [2024-12-09T04:12:37.206Z] =================================================================================================================== 00:18:00.560 [2024-12-09T04:12:37.206Z] Total : 5274.95 20.61 0.00 0.00 23908.34 5100.41 34648.60 00:18:00.560 { 00:18:00.560 "results": [ 00:18:00.560 { 00:18:00.560 "job": "nvme0n1", 00:18:00.560 "core_mask": "0x2", 00:18:00.560 "workload": "verify", 00:18:00.560 "status": "finished", 00:18:00.560 "verify_range": { 00:18:00.560 "start": 0, 00:18:00.560 "length": 8192 00:18:00.560 }, 00:18:00.560 "queue_depth": 128, 00:18:00.560 "io_size": 4096, 00:18:00.560 "runtime": 1.034512, 00:18:00.560 "iops": 5274.950894721376, 00:18:00.560 "mibps": 20.605276932505376, 00:18:00.560 "io_failed": 0, 00:18:00.560 "io_timeout": 0, 00:18:00.560 "avg_latency_us": 23908.343286564523, 00:18:00.560 "min_latency_us": 5100.410434782609, 00:18:00.560 "max_latency_us": 34648.59826086956 00:18:00.560 } 00:18:00.560 ], 00:18:00.560 "core_count": 1 00:18:00.560 } 00:18:00.560 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3603179 00:18:00.560 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3603179 ']' 00:18:00.560 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3603179 00:18:00.560 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:00.560 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.560 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3603179 00:18:00.560 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:00.560 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:00.560 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3603179' 00:18:00.560 killing process with pid 3603179 00:18:00.560 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3603179 00:18:00.560 Received shutdown signal, test time was about 1.000000 seconds 00:18:00.560 00:18:00.560 Latency(us) 00:18:00.560 [2024-12-09T04:12:37.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.560 [2024-12-09T04:12:37.206Z] =================================================================================================================== 00:18:00.560 [2024-12-09T04:12:37.206Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:00.560 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3603179 00:18:00.560 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3602797 00:18:00.560 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3602797 ']' 00:18:00.560 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3602797 00:18:00.560 05:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:00.560 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.560 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3602797 00:18:00.560 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:00.560 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:00.560 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3602797' 00:18:00.560 killing process with pid 3602797 00:18:00.561 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3602797 00:18:00.561 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3602797 00:18:00.820 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:00.820 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:00.820 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:00.820 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.820 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3603453 00:18:00.820 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:00.820 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3603453 00:18:00.820 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3603453 ']' 00:18:00.820 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.820 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:00.820 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.820 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:00.820 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.820 [2024-12-09 05:12:37.407214] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:18:00.820 [2024-12-09 05:12:37.407260] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.080 [2024-12-09 05:12:37.473946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.080 [2024-12-09 05:12:37.514924] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.080 [2024-12-09 05:12:37.514965] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:01.080 [2024-12-09 05:12:37.514972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.080 [2024-12-09 05:12:37.514979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.080 [2024-12-09 05:12:37.514984] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:01.080 [2024-12-09 05:12:37.515586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.080 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.080 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:01.080 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:01.080 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:01.080 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.080 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.080 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:01.080 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.080 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.080 [2024-12-09 05:12:37.648046] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.080 malloc0 00:18:01.080 [2024-12-09 05:12:37.676155] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:01.080 [2024-12-09 05:12:37.676375] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.080 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.080 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3603588 00:18:01.080 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:01.080 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3603588 /var/tmp/bdevperf.sock 00:18:01.080 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3603588 ']' 00:18:01.080 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:01.080 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.080 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:01.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:01.080 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.080 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.340 [2024-12-09 05:12:37.734804] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:18:01.340 [2024-12-09 05:12:37.734844] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3603588 ] 00:18:01.340 [2024-12-09 05:12:37.798718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.340 [2024-12-09 05:12:37.842448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.340 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.340 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:01.340 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gMWyOCJQJN 00:18:01.599 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:01.857 [2024-12-09 05:12:38.300534] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:01.857 nvme0n1 00:18:01.857 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:01.857 Running I/O for 1 seconds... 00:18:03.236 5165.00 IOPS, 20.18 MiB/s 00:18:03.236 Latency(us) 00:18:03.236 [2024-12-09T04:12:39.882Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.236 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:03.236 Verification LBA range: start 0x0 length 0x2000 00:18:03.236 nvme0n1 : 1.02 5202.28 20.32 0.00 0.00 24406.60 6297.15 26442.35 00:18:03.236 [2024-12-09T04:12:39.882Z] =================================================================================================================== 00:18:03.236 [2024-12-09T04:12:39.882Z] Total : 5202.28 20.32 0.00 0.00 24406.60 6297.15 26442.35 00:18:03.236 { 00:18:03.236 "results": [ 00:18:03.236 { 00:18:03.236 "job": "nvme0n1", 00:18:03.236 "core_mask": "0x2", 00:18:03.236 "workload": "verify", 00:18:03.236 "status": "finished", 00:18:03.236 "verify_range": { 00:18:03.236 "start": 0, 00:18:03.236 "length": 8192 00:18:03.236 }, 00:18:03.236 "queue_depth": 128, 00:18:03.236 "io_size": 4096, 00:18:03.236 "runtime": 1.017438, 00:18:03.236 "iops": 5202.282596089393, 00:18:03.236 "mibps": 20.321416390974193, 00:18:03.236 "io_failed": 0, 00:18:03.236 "io_timeout": 0, 00:18:03.236 "avg_latency_us": 24406.59768455466, 00:18:03.236 "min_latency_us": 6297.154782608695, 00:18:03.236 "max_latency_us": 26442.351304347827 00:18:03.236 } 00:18:03.236 ], 00:18:03.236 "core_count": 1 00:18:03.236 } 00:18:03.236 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:03.236 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.236 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.236 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.236 05:12:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:03.236 "subsystems": [ 00:18:03.236 { 00:18:03.236 "subsystem": "keyring", 00:18:03.236 "config": [ 00:18:03.236 { 00:18:03.236 "method": "keyring_file_add_key", 00:18:03.236 "params": { 00:18:03.236 "name": "key0", 00:18:03.236 "path": "/tmp/tmp.gMWyOCJQJN" 00:18:03.236 } 00:18:03.236 } 00:18:03.236 ] 00:18:03.236 }, 00:18:03.236 { 00:18:03.236 "subsystem": "iobuf", 00:18:03.236 "config": [ 00:18:03.236 { 00:18:03.236 "method": "iobuf_set_options", 00:18:03.236 "params": { 00:18:03.236 "small_pool_count": 8192, 00:18:03.236 "large_pool_count": 1024, 00:18:03.236 "small_bufsize": 8192, 00:18:03.236 "large_bufsize": 135168, 00:18:03.236 "enable_numa": false 00:18:03.236 } 00:18:03.236 } 00:18:03.236 ] 00:18:03.236 }, 00:18:03.236 { 00:18:03.236 "subsystem": "sock", 00:18:03.236 "config": [ 00:18:03.236 { 00:18:03.236 "method": "sock_set_default_impl", 00:18:03.236 "params": { 00:18:03.236 "impl_name": "posix" 00:18:03.236 } 00:18:03.236 }, 00:18:03.236 { 00:18:03.236 "method": "sock_impl_set_options", 00:18:03.236 "params": { 00:18:03.236 "impl_name": "ssl", 00:18:03.236 "recv_buf_size": 4096, 00:18:03.236 "send_buf_size": 4096, 00:18:03.236 "enable_recv_pipe": true, 00:18:03.236 "enable_quickack": false, 00:18:03.236 "enable_placement_id": 0, 00:18:03.236 "enable_zerocopy_send_server": true, 00:18:03.236 "enable_zerocopy_send_client": false, 00:18:03.236 "zerocopy_threshold": 0, 00:18:03.236 "tls_version": 0, 00:18:03.236 "enable_ktls": false 00:18:03.236 } 00:18:03.236 }, 00:18:03.236 { 00:18:03.236 "method": "sock_impl_set_options", 00:18:03.236 "params": { 00:18:03.236 "impl_name": "posix", 00:18:03.236 "recv_buf_size": 2097152, 00:18:03.236 "send_buf_size": 2097152, 00:18:03.236 "enable_recv_pipe": true, 00:18:03.236 "enable_quickack": false, 00:18:03.236 "enable_placement_id": 0, 00:18:03.236 "enable_zerocopy_send_server": true, 00:18:03.236 "enable_zerocopy_send_client": false, 00:18:03.236 "zerocopy_threshold": 0, 00:18:03.236 "tls_version": 0, 00:18:03.236 "enable_ktls": false 00:18:03.236 } 00:18:03.236 } 00:18:03.236 ] 00:18:03.236 }, 00:18:03.236 { 00:18:03.236 "subsystem": "vmd", 00:18:03.236 "config": [] 00:18:03.236 }, 00:18:03.236 { 00:18:03.236 "subsystem": "accel", 00:18:03.236 "config": [ 00:18:03.236 { 00:18:03.236 "method": "accel_set_options", 00:18:03.236 "params": { 00:18:03.236 "small_cache_size": 128, 00:18:03.236 "large_cache_size": 16, 00:18:03.236 "task_count": 2048, 00:18:03.236 "sequence_count": 2048, 00:18:03.236 "buf_count": 2048 00:18:03.236 } 00:18:03.236 } 00:18:03.236 ] 00:18:03.236 }, 00:18:03.236 { 00:18:03.236 "subsystem": "bdev", 00:18:03.236 "config": [ 00:18:03.236 { 00:18:03.236 "method": "bdev_set_options", 00:18:03.236 "params": { 00:18:03.236 "bdev_io_pool_size": 65535, 00:18:03.236 "bdev_io_cache_size": 256, 00:18:03.236 "bdev_auto_examine": true, 00:18:03.236 "iobuf_small_cache_size": 128, 00:18:03.236 "iobuf_large_cache_size": 16 00:18:03.236 } 00:18:03.236 }, 00:18:03.236 { 00:18:03.236 "method": "bdev_raid_set_options", 00:18:03.236 "params": { 00:18:03.236 "process_window_size_kb": 1024, 00:18:03.236 "process_max_bandwidth_mb_sec": 0 00:18:03.236 } 00:18:03.236 }, 00:18:03.236 { 00:18:03.236 "method": "bdev_iscsi_set_options", 00:18:03.236 "params": { 00:18:03.236 "timeout_sec": 30 00:18:03.236 } 00:18:03.236 }, 00:18:03.236 { 00:18:03.236 "method": "bdev_nvme_set_options", 00:18:03.236 "params": { 00:18:03.236 "action_on_timeout": "none", 00:18:03.236 
"timeout_us": 0, 00:18:03.236 "timeout_admin_us": 0, 00:18:03.236 "keep_alive_timeout_ms": 10000, 00:18:03.236 "arbitration_burst": 0, 00:18:03.236 "low_priority_weight": 0, 00:18:03.236 "medium_priority_weight": 0, 00:18:03.236 "high_priority_weight": 0, 00:18:03.236 "nvme_adminq_poll_period_us": 10000, 00:18:03.236 "nvme_ioq_poll_period_us": 0, 00:18:03.236 "io_queue_requests": 0, 00:18:03.236 "delay_cmd_submit": true, 00:18:03.236 "transport_retry_count": 4, 00:18:03.236 "bdev_retry_count": 3, 00:18:03.236 "transport_ack_timeout": 0, 00:18:03.236 "ctrlr_loss_timeout_sec": 0, 00:18:03.236 "reconnect_delay_sec": 0, 00:18:03.236 "fast_io_fail_timeout_sec": 0, 00:18:03.236 "disable_auto_failback": false, 00:18:03.236 "generate_uuids": false, 00:18:03.236 "transport_tos": 0, 00:18:03.236 "nvme_error_stat": false, 00:18:03.236 "rdma_srq_size": 0, 00:18:03.236 "io_path_stat": false, 00:18:03.236 "allow_accel_sequence": false, 00:18:03.236 "rdma_max_cq_size": 0, 00:18:03.236 "rdma_cm_event_timeout_ms": 0, 00:18:03.236 "dhchap_digests": [ 00:18:03.236 "sha256", 00:18:03.236 "sha384", 00:18:03.236 "sha512" 00:18:03.236 ], 00:18:03.236 "dhchap_dhgroups": [ 00:18:03.236 "null", 00:18:03.236 "ffdhe2048", 00:18:03.236 "ffdhe3072", 00:18:03.236 "ffdhe4096", 00:18:03.236 "ffdhe6144", 00:18:03.236 "ffdhe8192" 00:18:03.236 ] 00:18:03.236 } 00:18:03.236 }, 00:18:03.236 { 00:18:03.236 "method": "bdev_nvme_set_hotplug", 00:18:03.236 "params": { 00:18:03.236 "period_us": 100000, 00:18:03.236 "enable": false 00:18:03.236 } 00:18:03.236 }, 00:18:03.236 { 00:18:03.236 "method": "bdev_malloc_create", 00:18:03.236 "params": { 00:18:03.236 "name": "malloc0", 00:18:03.236 "num_blocks": 8192, 00:18:03.236 "block_size": 4096, 00:18:03.236 "physical_block_size": 4096, 00:18:03.236 "uuid": "b77cb4ec-e1cb-4620-bf87-b4f221be8845", 00:18:03.236 "optimal_io_boundary": 0, 00:18:03.236 "md_size": 0, 00:18:03.236 "dif_type": 0, 00:18:03.236 "dif_is_head_of_md": false, 00:18:03.236 "dif_pi_format": 0 00:18:03.236 } 00:18:03.236 }, 00:18:03.236 { 00:18:03.236 "method": "bdev_wait_for_examine" 00:18:03.236 } 00:18:03.236 ] 00:18:03.236 }, 00:18:03.236 { 00:18:03.236 "subsystem": "nbd", 00:18:03.236 "config": [] 00:18:03.236 }, 00:18:03.236 { 00:18:03.236 "subsystem": "scheduler", 00:18:03.236 "config": [ 00:18:03.236 { 00:18:03.236 "method": "framework_set_scheduler", 00:18:03.236 "params": { 00:18:03.236 "name": "static" 00:18:03.236 } 00:18:03.236 } 00:18:03.236 ] 00:18:03.236 }, 00:18:03.236 { 00:18:03.236 "subsystem": "nvmf", 00:18:03.236 "config": [ 00:18:03.236 { 00:18:03.236 "method": "nvmf_set_config", 00:18:03.236 "params": { 00:18:03.236 "discovery_filter": "match_any", 00:18:03.236 "admin_cmd_passthru": { 00:18:03.236 "identify_ctrlr": false 00:18:03.236 }, 00:18:03.237 "dhchap_digests": [ 00:18:03.237 "sha256", 00:18:03.237 "sha384", 00:18:03.237 "sha512" 00:18:03.237 ], 00:18:03.237 "dhchap_dhgroups": [ 00:18:03.237 "null", 00:18:03.237 "ffdhe2048", 00:18:03.237 "ffdhe3072", 00:18:03.237 "ffdhe4096", 00:18:03.237 "ffdhe6144", 00:18:03.237 "ffdhe8192" 00:18:03.237 ] 00:18:03.237 } 00:18:03.237 }, 00:18:03.237 { 00:18:03.237 "method": "nvmf_set_max_subsystems", 00:18:03.237 "params": { 00:18:03.237 "max_subsystems": 1024 00:18:03.237 } 00:18:03.237 }, 00:18:03.237 { 00:18:03.237 "method": "nvmf_set_crdt", 00:18:03.237 "params": { 00:18:03.237 "crdt1": 0, 00:18:03.237 "crdt2": 0, 00:18:03.237 "crdt3": 0 00:18:03.237 } 00:18:03.237 }, 00:18:03.237 { 00:18:03.237 "method": "nvmf_create_transport", 00:18:03.237 "params": 
{ 00:18:03.237 "trtype": "TCP", 00:18:03.237 "max_queue_depth": 128, 00:18:03.237 "max_io_qpairs_per_ctrlr": 127, 00:18:03.237 "in_capsule_data_size": 4096, 00:18:03.237 "max_io_size": 131072, 00:18:03.237 "io_unit_size": 131072, 00:18:03.237 "max_aq_depth": 128, 00:18:03.237 "num_shared_buffers": 511, 00:18:03.237 "buf_cache_size": 4294967295, 00:18:03.237 "dif_insert_or_strip": false, 00:18:03.237 "zcopy": false, 00:18:03.237 "c2h_success": false, 00:18:03.237 "sock_priority": 0, 00:18:03.237 "abort_timeout_sec": 1, 00:18:03.237 "ack_timeout": 0, 00:18:03.237 "data_wr_pool_size": 0 00:18:03.237 } 00:18:03.237 }, 00:18:03.237 { 00:18:03.237 "method": "nvmf_create_subsystem", 00:18:03.237 "params": { 00:18:03.237 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.237 "allow_any_host": false, 00:18:03.237 "serial_number": "00000000000000000000", 00:18:03.237 "model_number": "SPDK bdev Controller", 00:18:03.237 "max_namespaces": 32, 00:18:03.237 "min_cntlid": 1, 00:18:03.237 "max_cntlid": 65519, 00:18:03.237 "ana_reporting": false 00:18:03.237 } 00:18:03.237 }, 00:18:03.237 { 00:18:03.237 "method": "nvmf_subsystem_add_host", 00:18:03.237 "params": { 00:18:03.237 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.237 "host": "nqn.2016-06.io.spdk:host1", 00:18:03.237 "psk": "key0" 00:18:03.237 } 00:18:03.237 }, 00:18:03.237 { 00:18:03.237 "method": "nvmf_subsystem_add_ns", 00:18:03.237 "params": { 00:18:03.237 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.237 "namespace": { 00:18:03.237 "nsid": 1, 00:18:03.237 "bdev_name": "malloc0", 00:18:03.237 "nguid": "B77CB4ECE1CB4620BF87B4F221BE8845", 00:18:03.237 "uuid": "b77cb4ec-e1cb-4620-bf87-b4f221be8845", 00:18:03.237 "no_auto_visible": false 00:18:03.237 } 00:18:03.237 } 00:18:03.237 }, 00:18:03.237 { 00:18:03.237 "method": "nvmf_subsystem_add_listener", 00:18:03.237 "params": { 00:18:03.237 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.237 "listen_address": { 00:18:03.237 "trtype": "TCP", 00:18:03.237 "adrfam": "IPv4", 00:18:03.237 "traddr": "10.0.0.2", 00:18:03.237 "trsvcid": "4420" 00:18:03.237 }, 00:18:03.237 "secure_channel": false, 00:18:03.237 "sock_impl": "ssl" 00:18:03.237 } 00:18:03.237 } 00:18:03.237 ] 00:18:03.237 } 00:18:03.237 ] 00:18:03.237 }' 00:18:03.237 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:03.496 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:03.496 "subsystems": [ 00:18:03.496 { 00:18:03.496 "subsystem": "keyring", 00:18:03.496 "config": [ 00:18:03.496 { 00:18:03.496 "method": "keyring_file_add_key", 00:18:03.496 "params": { 00:18:03.496 "name": "key0", 00:18:03.496 "path": "/tmp/tmp.gMWyOCJQJN" 00:18:03.496 } 00:18:03.496 } 00:18:03.496 ] 00:18:03.496 }, 00:18:03.496 { 00:18:03.497 "subsystem": "iobuf", 00:18:03.497 "config": [ 00:18:03.497 { 00:18:03.497 "method": "iobuf_set_options", 00:18:03.497 "params": { 00:18:03.497 "small_pool_count": 8192, 00:18:03.497 "large_pool_count": 1024, 00:18:03.497 "small_bufsize": 8192, 00:18:03.497 "large_bufsize": 135168, 00:18:03.497 "enable_numa": false 00:18:03.497 } 00:18:03.497 } 00:18:03.497 ] 00:18:03.497 }, 00:18:03.497 { 00:18:03.497 "subsystem": "sock", 00:18:03.497 "config": [ 00:18:03.497 { 00:18:03.497 "method": "sock_set_default_impl", 00:18:03.497 "params": { 00:18:03.497 "impl_name": "posix" 00:18:03.497 } 00:18:03.497 }, 00:18:03.497 { 00:18:03.497 "method": "sock_impl_set_options", 00:18:03.497 
"params": { 00:18:03.497 "impl_name": "ssl", 00:18:03.497 "recv_buf_size": 4096, 00:18:03.497 "send_buf_size": 4096, 00:18:03.497 "enable_recv_pipe": true, 00:18:03.497 "enable_quickack": false, 00:18:03.497 "enable_placement_id": 0, 00:18:03.497 "enable_zerocopy_send_server": true, 00:18:03.497 "enable_zerocopy_send_client": false, 00:18:03.497 "zerocopy_threshold": 0, 00:18:03.497 "tls_version": 0, 00:18:03.497 "enable_ktls": false 00:18:03.497 } 00:18:03.497 }, 00:18:03.497 { 00:18:03.497 "method": "sock_impl_set_options", 00:18:03.497 "params": { 00:18:03.497 "impl_name": "posix", 00:18:03.497 "recv_buf_size": 2097152, 00:18:03.497 "send_buf_size": 2097152, 00:18:03.497 "enable_recv_pipe": true, 00:18:03.497 "enable_quickack": false, 00:18:03.497 "enable_placement_id": 0, 00:18:03.497 "enable_zerocopy_send_server": true, 00:18:03.497 "enable_zerocopy_send_client": false, 00:18:03.497 "zerocopy_threshold": 0, 00:18:03.497 "tls_version": 0, 00:18:03.497 "enable_ktls": false 00:18:03.497 } 00:18:03.497 } 00:18:03.497 ] 00:18:03.497 }, 00:18:03.497 { 00:18:03.497 "subsystem": "vmd", 00:18:03.497 "config": [] 00:18:03.497 }, 00:18:03.497 { 00:18:03.497 "subsystem": "accel", 00:18:03.497 "config": [ 00:18:03.497 { 00:18:03.497 "method": "accel_set_options", 00:18:03.497 "params": { 00:18:03.497 "small_cache_size": 128, 00:18:03.497 "large_cache_size": 16, 00:18:03.497 "task_count": 2048, 00:18:03.497 "sequence_count": 2048, 00:18:03.497 "buf_count": 2048 00:18:03.497 } 00:18:03.497 } 00:18:03.497 ] 00:18:03.497 }, 00:18:03.497 { 00:18:03.497 "subsystem": "bdev", 00:18:03.497 "config": [ 00:18:03.497 { 00:18:03.497 "method": "bdev_set_options", 00:18:03.497 "params": { 00:18:03.497 "bdev_io_pool_size": 65535, 00:18:03.497 "bdev_io_cache_size": 256, 00:18:03.497 "bdev_auto_examine": true, 00:18:03.497 "iobuf_small_cache_size": 128, 00:18:03.497 "iobuf_large_cache_size": 16 00:18:03.497 } 00:18:03.497 }, 00:18:03.497 { 00:18:03.497 "method": "bdev_raid_set_options", 00:18:03.497 "params": { 00:18:03.497 "process_window_size_kb": 1024, 00:18:03.497 "process_max_bandwidth_mb_sec": 0 00:18:03.497 } 00:18:03.497 }, 00:18:03.497 { 00:18:03.497 "method": "bdev_iscsi_set_options", 00:18:03.497 "params": { 00:18:03.497 "timeout_sec": 30 00:18:03.497 } 00:18:03.497 }, 00:18:03.497 { 00:18:03.497 "method": "bdev_nvme_set_options", 00:18:03.497 "params": { 00:18:03.497 "action_on_timeout": "none", 00:18:03.497 "timeout_us": 0, 00:18:03.497 "timeout_admin_us": 0, 00:18:03.497 "keep_alive_timeout_ms": 10000, 00:18:03.497 "arbitration_burst": 0, 00:18:03.497 "low_priority_weight": 0, 00:18:03.497 "medium_priority_weight": 0, 00:18:03.497 "high_priority_weight": 0, 00:18:03.497 "nvme_adminq_poll_period_us": 10000, 00:18:03.497 "nvme_ioq_poll_period_us": 0, 00:18:03.497 "io_queue_requests": 512, 00:18:03.497 "delay_cmd_submit": true, 00:18:03.497 "transport_retry_count": 4, 00:18:03.497 "bdev_retry_count": 3, 00:18:03.497 "transport_ack_timeout": 0, 00:18:03.497 "ctrlr_loss_timeout_sec": 0, 00:18:03.497 "reconnect_delay_sec": 0, 00:18:03.497 "fast_io_fail_timeout_sec": 0, 00:18:03.497 "disable_auto_failback": false, 00:18:03.497 "generate_uuids": false, 00:18:03.497 "transport_tos": 0, 00:18:03.497 "nvme_error_stat": false, 00:18:03.497 "rdma_srq_size": 0, 00:18:03.497 "io_path_stat": false, 00:18:03.497 "allow_accel_sequence": false, 00:18:03.497 "rdma_max_cq_size": 0, 00:18:03.497 "rdma_cm_event_timeout_ms": 0, 00:18:03.497 "dhchap_digests": [ 00:18:03.497 "sha256", 00:18:03.497 "sha384", 00:18:03.497 
"sha512" 00:18:03.497 ], 00:18:03.497 "dhchap_dhgroups": [ 00:18:03.497 "null", 00:18:03.497 "ffdhe2048", 00:18:03.497 "ffdhe3072", 00:18:03.497 "ffdhe4096", 00:18:03.497 "ffdhe6144", 00:18:03.497 "ffdhe8192" 00:18:03.497 ] 00:18:03.497 } 00:18:03.497 }, 00:18:03.497 { 00:18:03.497 "method": "bdev_nvme_attach_controller", 00:18:03.497 "params": { 00:18:03.497 "name": "nvme0", 00:18:03.497 "trtype": "TCP", 00:18:03.497 "adrfam": "IPv4", 00:18:03.497 "traddr": "10.0.0.2", 00:18:03.497 "trsvcid": "4420", 00:18:03.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.497 "prchk_reftag": false, 00:18:03.497 "prchk_guard": false, 00:18:03.497 "ctrlr_loss_timeout_sec": 0, 00:18:03.497 "reconnect_delay_sec": 0, 00:18:03.497 "fast_io_fail_timeout_sec": 0, 00:18:03.497 "psk": "key0", 00:18:03.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.497 "hdgst": false, 00:18:03.497 "ddgst": false, 00:18:03.497 "multipath": "multipath" 00:18:03.497 } 00:18:03.497 }, 00:18:03.497 { 00:18:03.497 "method": "bdev_nvme_set_hotplug", 00:18:03.497 "params": { 00:18:03.497 "period_us": 100000, 00:18:03.497 "enable": false 00:18:03.497 } 00:18:03.497 }, 00:18:03.497 { 00:18:03.497 "method": "bdev_enable_histogram", 00:18:03.497 "params": { 00:18:03.497 "name": "nvme0n1", 00:18:03.497 "enable": true 00:18:03.497 } 00:18:03.497 }, 00:18:03.497 { 00:18:03.497 "method": "bdev_wait_for_examine" 00:18:03.497 } 00:18:03.497 ] 00:18:03.497 }, 00:18:03.497 { 00:18:03.497 "subsystem": "nbd", 00:18:03.497 "config": [] 00:18:03.497 } 00:18:03.497 ] 00:18:03.497 }' 00:18:03.497 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3603588 00:18:03.497 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3603588 ']' 00:18:03.497 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3603588 00:18:03.497 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:03.497 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:03.497 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3603588 00:18:03.497 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:03.497 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:03.497 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3603588' 00:18:03.497 killing process with pid 3603588 00:18:03.497 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3603588 00:18:03.497 Received shutdown signal, test time was about 1.000000 seconds 00:18:03.497 00:18:03.497 Latency(us) 00:18:03.497 [2024-12-09T04:12:40.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.497 [2024-12-09T04:12:40.143Z] =================================================================================================================== 00:18:03.497 [2024-12-09T04:12:40.143Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:03.497 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3603588 00:18:03.497 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3603453 00:18:03.497 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3603453 
']' 00:18:03.497 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3603453 00:18:03.497 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:03.756 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:03.756 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3603453 00:18:03.756 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:03.756 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:03.756 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3603453' 00:18:03.756 killing process with pid 3603453 00:18:03.756 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3603453 00:18:03.756 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3603453 00:18:03.757 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:03.757 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:03.757 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:03.757 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:03.757 "subsystems": [ 00:18:03.757 { 00:18:03.757 "subsystem": "keyring", 00:18:03.757 "config": [ 00:18:03.757 { 00:18:03.757 "method": "keyring_file_add_key", 00:18:03.757 "params": { 00:18:03.757 "name": "key0", 00:18:03.757 "path": "/tmp/tmp.gMWyOCJQJN" 00:18:03.757 } 00:18:03.757 } 00:18:03.757 ] 00:18:03.757 }, 00:18:03.757 { 00:18:03.757 "subsystem": "iobuf", 00:18:03.757 "config": [ 00:18:03.757 { 00:18:03.757 "method": "iobuf_set_options", 00:18:03.757 "params": { 00:18:03.757 "small_pool_count": 8192, 00:18:03.757 "large_pool_count": 1024, 00:18:03.757 "small_bufsize": 8192, 00:18:03.757 "large_bufsize": 135168, 00:18:03.757 "enable_numa": false 00:18:03.757 } 00:18:03.757 } 00:18:03.757 ] 00:18:03.757 }, 00:18:03.757 { 00:18:03.757 "subsystem": "sock", 00:18:03.757 "config": [ 00:18:03.757 { 00:18:03.757 "method": "sock_set_default_impl", 00:18:03.757 "params": { 00:18:03.757 "impl_name": "posix" 00:18:03.757 } 00:18:03.757 }, 00:18:03.757 { 00:18:03.757 "method": "sock_impl_set_options", 00:18:03.757 "params": { 00:18:03.757 "impl_name": "ssl", 00:18:03.757 "recv_buf_size": 4096, 00:18:03.757 "send_buf_size": 4096, 00:18:03.757 "enable_recv_pipe": true, 00:18:03.757 "enable_quickack": false, 00:18:03.757 "enable_placement_id": 0, 00:18:03.757 "enable_zerocopy_send_server": true, 00:18:03.757 "enable_zerocopy_send_client": false, 00:18:03.757 "zerocopy_threshold": 0, 00:18:03.757 "tls_version": 0, 00:18:03.757 "enable_ktls": false 00:18:03.757 } 00:18:03.757 }, 00:18:03.757 { 00:18:03.757 "method": "sock_impl_set_options", 00:18:03.757 "params": { 00:18:03.757 "impl_name": "posix", 00:18:03.757 "recv_buf_size": 2097152, 00:18:03.757 "send_buf_size": 2097152, 00:18:03.757 "enable_recv_pipe": true, 00:18:03.757 "enable_quickack": false, 00:18:03.757 "enable_placement_id": 0, 00:18:03.757 "enable_zerocopy_send_server": true, 00:18:03.757 "enable_zerocopy_send_client": false, 00:18:03.757 "zerocopy_threshold": 0, 00:18:03.757 "tls_version": 0, 00:18:03.757 "enable_ktls": 
false 00:18:03.757 } 00:18:03.757 } 00:18:03.757 ] 00:18:03.757 }, 00:18:03.757 { 00:18:03.757 "subsystem": "vmd", 00:18:03.757 "config": [] 00:18:03.757 }, 00:18:03.757 { 00:18:03.757 "subsystem": "accel", 00:18:03.757 "config": [ 00:18:03.757 { 00:18:03.757 "method": "accel_set_options", 00:18:03.757 "params": { 00:18:03.757 "small_cache_size": 128, 00:18:03.757 "large_cache_size": 16, 00:18:03.757 "task_count": 2048, 00:18:03.757 "sequence_count": 2048, 00:18:03.757 "buf_count": 2048 00:18:03.757 } 00:18:03.757 } 00:18:03.757 ] 00:18:03.757 }, 00:18:03.757 { 00:18:03.757 "subsystem": "bdev", 00:18:03.757 "config": [ 00:18:03.757 { 00:18:03.757 "method": "bdev_set_options", 00:18:03.757 "params": { 00:18:03.757 "bdev_io_pool_size": 65535, 00:18:03.757 "bdev_io_cache_size": 256, 00:18:03.757 "bdev_auto_examine": true, 00:18:03.757 "iobuf_small_cache_size": 128, 00:18:03.757 "iobuf_large_cache_size": 16 00:18:03.757 } 00:18:03.757 }, 00:18:03.757 { 00:18:03.757 "method": "bdev_raid_set_options", 00:18:03.757 "params": { 00:18:03.757 "process_window_size_kb": 1024, 00:18:03.757 "process_max_bandwidth_mb_sec": 0 00:18:03.757 } 00:18:03.757 }, 00:18:03.757 { 00:18:03.757 "method": "bdev_iscsi_set_options", 00:18:03.757 "params": { 00:18:03.757 "timeout_sec": 30 00:18:03.757 } 00:18:03.757 }, 00:18:03.757 { 00:18:03.757 "method": "bdev_nvme_set_options", 00:18:03.757 "params": { 00:18:03.757 "action_on_timeout": "none", 00:18:03.757 "timeout_us": 0, 00:18:03.757 "timeout_admin_us": 0, 00:18:03.757 "keep_alive_timeout_ms": 10000, 00:18:03.757 "arbitration_burst": 0, 00:18:03.757 "low_priority_weight": 0, 00:18:03.757 "medium_priority_weight": 0, 00:18:03.757 "high_priority_weight": 0, 00:18:03.757 "nvme_adminq_poll_period_us": 10000, 00:18:03.757 "nvme_ioq_poll_period_us": 0, 00:18:03.757 "io_queue_requests": 0, 00:18:03.757 "delay_cmd_submit": true, 00:18:03.757 "transport_retry_count": 4, 00:18:03.757 "bdev_retry_count": 3, 00:18:03.757 "transport_ack_timeout": 0, 00:18:03.757 "ctrlr_loss_timeout_sec": 0, 00:18:03.757 "reconnect_delay_sec": 0, 00:18:03.757 "fast_io_fail_timeout_sec": 0, 00:18:03.757 "disable_auto_failback": false, 00:18:03.757 "generate_uuids": false, 00:18:03.757 "transport_tos": 0, 00:18:03.757 "nvme_error_stat": false, 00:18:03.757 "rdma_srq_size": 0, 00:18:03.757 "io_path_stat": false, 00:18:03.757 "allow_accel_sequence": false, 00:18:03.757 "rdma_max_cq_size": 0, 00:18:03.757 "rdma_cm_event_timeout_ms": 0, 00:18:03.757 "dhchap_digests": [ 00:18:03.757 "sha256", 00:18:03.757 "sha384", 00:18:03.757 "sha512" 00:18:03.757 ], 00:18:03.757 "dhchap_dhgroups": [ 00:18:03.757 "null", 00:18:03.757 "ffdhe2048", 00:18:03.757 "ffdhe3072", 00:18:03.757 "ffdhe4096", 00:18:03.757 "ffdhe6144", 00:18:03.757 "ffdhe8192" 00:18:03.757 ] 00:18:03.757 } 00:18:03.757 }, 00:18:03.757 { 00:18:03.757 "method": "bdev_nvme_set_hotplug", 00:18:03.757 "params": { 00:18:03.757 "period_us": 100000, 00:18:03.757 "enable": false 00:18:03.757 } 00:18:03.757 }, 00:18:03.757 { 00:18:03.757 "method": "bdev_malloc_create", 00:18:03.757 "params": { 00:18:03.757 "name": "malloc0", 00:18:03.757 "num_blocks": 8192, 00:18:03.757 "block_size": 4096, 00:18:03.757 "physical_block_size": 4096, 00:18:03.757 "uuid": "b77cb4ec-e1cb-4620-bf87-b4f221be8845", 00:18:03.757 "optimal_io_boundary": 0, 00:18:03.757 "md_size": 0, 00:18:03.757 "dif_type": 0, 00:18:03.757 "dif_is_head_of_md": false, 00:18:03.757 "dif_pi_format": 0 00:18:03.757 } 00:18:03.757 }, 00:18:03.757 { 00:18:03.757 "method": "bdev_wait_for_examine" 
00:18:03.757 } 00:18:03.757 ] 00:18:03.757 }, 00:18:03.757 { 00:18:03.757 "subsystem": "nbd", 00:18:03.757 "config": [] 00:18:03.757 }, 00:18:03.757 { 00:18:03.757 "subsystem": "scheduler", 00:18:03.757 "config": [ 00:18:03.757 { 00:18:03.757 "method": "framework_set_scheduler", 00:18:03.757 "params": { 00:18:03.757 "name": "static" 00:18:03.757 } 00:18:03.757 } 00:18:03.757 ] 00:18:03.757 }, 00:18:03.757 { 00:18:03.757 "subsystem": "nvmf", 00:18:03.757 "config": [ 00:18:03.757 { 00:18:03.757 "method": "nvmf_set_config", 00:18:03.757 "params": { 00:18:03.757 "discovery_filter": "match_any", 00:18:03.757 "admin_cmd_passthru": { 00:18:03.757 "identify_ctrlr": false 00:18:03.757 }, 00:18:03.757 "dhchap_digests": [ 00:18:03.757 "sha256", 00:18:03.757 "sha384", 00:18:03.757 "sha512" 00:18:03.757 ], 00:18:03.757 "dhchap_dhgroups": [ 00:18:03.757 "null", 00:18:03.757 "ffdhe2048", 00:18:03.757 "ffdhe3072", 00:18:03.757 "ffdhe4096", 00:18:03.757 "ffdhe6144", 00:18:03.757 "ffdhe8192" 00:18:03.757 ] 00:18:03.757 } 00:18:03.757 }, 00:18:03.757 { 00:18:03.757 "method": "nvmf_set_max_subsystems", 00:18:03.757 "params": { 00:18:03.757 "max_subsystems": 1024 00:18:03.757 } 00:18:03.757 }, 00:18:03.757 { 00:18:03.757 "method": "nvmf_set_crdt", 00:18:03.757 "params": { 00:18:03.757 "crdt1": 0, 00:18:03.757 "crdt2": 0, 00:18:03.757 "crdt3": 0 00:18:03.757 } 00:18:03.757 }, 00:18:03.757 { 00:18:03.757 "method": "nvmf_create_transport", 00:18:03.757 "params": { 00:18:03.757 "trtype": "TCP", 00:18:03.757 "max_queue_depth": 128, 00:18:03.757 "max_io_qpairs_per_ctrlr": 127, 00:18:03.757 "in_capsule_data_size": 4096, 00:18:03.757 "max_io_size": 131072, 00:18:03.757 "io_unit_size": 131072, 00:18:03.757 "max_aq_depth": 128, 00:18:03.757 "num_shared_buffers": 511, 00:18:03.757 "buf_cache_size": 4294967295, 00:18:03.757 "dif_insert_or_strip": false, 00:18:03.757 "zcopy": false, 00:18:03.757 "c2h_success": false, 00:18:03.757 "sock_priority": 0, 00:18:03.757 "abort_timeout_sec": 1, 00:18:03.757 "ack_timeout": 0, 00:18:03.757 "data_wr_pool_size": 0 00:18:03.757 } 00:18:03.757 }, 00:18:03.757 { 00:18:03.757 "method": "nvmf_create_subsystem", 00:18:03.757 "params": { 00:18:03.757 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.757 "allow_any_host": false, 00:18:03.757 "serial_number": "00000000000000000000", 00:18:03.757 "model_number": "SPDK bdev Controller", 00:18:03.757 "max_namespaces": 32, 00:18:03.757 "min_cntlid": 1, 00:18:03.757 "max_cntlid": 65519, 00:18:03.757 "ana_reporting": false 00:18:03.757 } 00:18:03.757 }, 00:18:03.757 { 00:18:03.757 "method": "nvmf_subsystem_add_host", 00:18:03.757 "params": { 00:18:03.757 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.757 "host": "nqn.2016-06.io.spdk:host1", 00:18:03.757 "psk": "key0" 00:18:03.757 } 00:18:03.757 }, 00:18:03.757 { 00:18:03.757 "method": "nvmf_subsystem_add_ns", 00:18:03.757 "params": { 00:18:03.757 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.757 "namespace": { 00:18:03.757 "nsid": 1, 00:18:03.757 "bdev_name": "malloc0", 00:18:03.757 "nguid": "B77CB4ECE1CB4620BF87B4F221BE8845", 00:18:03.757 "uuid": "b77cb4ec-e1cb-4620-bf87-b4f221be8845", 00:18:03.757 "no_auto_visible": false 00:18:03.757 } 00:18:03.757 } 00:18:03.757 }, 00:18:03.757 { 00:18:03.757 "method": "nvmf_subsystem_add_listener", 00:18:03.757 "params": { 00:18:03.757 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.757 "listen_address": { 00:18:03.757 "trtype": "TCP", 00:18:03.757 "adrfam": "IPv4", 00:18:03.757 "traddr": "10.0.0.2", 00:18:03.757 "trsvcid": "4420" 00:18:03.757 }, 00:18:03.757 
"secure_channel": false, 00:18:03.757 "sock_impl": "ssl" 00:18:03.757 } 00:18:03.757 } 00:18:03.757 ] 00:18:03.757 } 00:18:03.757 ] 00:18:03.757 }' 00:18:03.757 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.016 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3603996 00:18:04.016 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:04.016 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3603996 00:18:04.016 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3603996 ']' 00:18:04.016 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.016 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.016 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.016 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.016 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.016 [2024-12-09 05:12:40.456894] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:18:04.016 [2024-12-09 05:12:40.456944] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.016 [2024-12-09 05:12:40.527928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.016 [2024-12-09 05:12:40.569122] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:04.016 [2024-12-09 05:12:40.569160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:04.016 [2024-12-09 05:12:40.569167] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:04.016 [2024-12-09 05:12:40.569176] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:04.016 [2024-12-09 05:12:40.569181] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:04.016 [2024-12-09 05:12:40.569783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.274 [2024-12-09 05:12:40.783058] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:04.274 [2024-12-09 05:12:40.815100] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:04.274 [2024-12-09 05:12:40.815312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:04.842 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:04.842 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:04.842 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:04.842 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:04.842 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.842 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.842 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3604196 00:18:04.842 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3604196 /var/tmp/bdevperf.sock 00:18:04.842 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3604196 ']' 00:18:04.842 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:04.842 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:04.842 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.842 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:04.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:04.842 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:04.842 "subsystems": [ 00:18:04.842 { 00:18:04.842 "subsystem": "keyring", 00:18:04.842 "config": [ 00:18:04.842 { 00:18:04.842 "method": "keyring_file_add_key", 00:18:04.842 "params": { 00:18:04.842 "name": "key0", 00:18:04.842 "path": "/tmp/tmp.gMWyOCJQJN" 00:18:04.842 } 00:18:04.842 } 00:18:04.842 ] 00:18:04.842 }, 00:18:04.842 { 00:18:04.842 "subsystem": "iobuf", 00:18:04.842 "config": [ 00:18:04.842 { 00:18:04.842 "method": "iobuf_set_options", 00:18:04.842 "params": { 00:18:04.842 "small_pool_count": 8192, 00:18:04.842 "large_pool_count": 1024, 00:18:04.842 "small_bufsize": 8192, 00:18:04.842 "large_bufsize": 135168, 00:18:04.842 "enable_numa": false 00:18:04.842 } 00:18:04.842 } 00:18:04.842 ] 00:18:04.842 }, 00:18:04.842 { 00:18:04.842 "subsystem": "sock", 00:18:04.842 "config": [ 00:18:04.842 { 00:18:04.842 "method": "sock_set_default_impl", 00:18:04.842 "params": { 00:18:04.842 "impl_name": "posix" 00:18:04.842 } 00:18:04.842 }, 00:18:04.842 { 00:18:04.842 "method": "sock_impl_set_options", 00:18:04.842 "params": { 00:18:04.842 "impl_name": "ssl", 00:18:04.842 "recv_buf_size": 4096, 00:18:04.842 "send_buf_size": 4096, 00:18:04.842 "enable_recv_pipe": true, 00:18:04.842 "enable_quickack": false, 00:18:04.842 "enable_placement_id": 0, 00:18:04.842 "enable_zerocopy_send_server": true, 00:18:04.842 "enable_zerocopy_send_client": false, 00:18:04.842 "zerocopy_threshold": 0, 00:18:04.842 "tls_version": 0, 00:18:04.842 "enable_ktls": false 00:18:04.842 } 00:18:04.842 }, 00:18:04.842 { 00:18:04.842 "method": "sock_impl_set_options", 00:18:04.842 "params": { 00:18:04.842 "impl_name": "posix", 00:18:04.842 "recv_buf_size": 2097152, 00:18:04.842 "send_buf_size": 2097152, 00:18:04.842 "enable_recv_pipe": true, 00:18:04.842 "enable_quickack": false, 00:18:04.842 "enable_placement_id": 0, 00:18:04.842 "enable_zerocopy_send_server": true, 00:18:04.842 "enable_zerocopy_send_client": false, 00:18:04.842 "zerocopy_threshold": 0, 00:18:04.842 "tls_version": 0, 00:18:04.842 "enable_ktls": false 00:18:04.842 } 00:18:04.842 } 00:18:04.842 ] 00:18:04.842 }, 00:18:04.842 { 00:18:04.842 "subsystem": "vmd", 00:18:04.842 "config": [] 00:18:04.842 }, 00:18:04.842 { 00:18:04.842 "subsystem": "accel", 00:18:04.842 "config": [ 00:18:04.842 { 00:18:04.842 "method": "accel_set_options", 00:18:04.842 "params": { 00:18:04.842 "small_cache_size": 128, 00:18:04.842 "large_cache_size": 16, 00:18:04.842 "task_count": 2048, 00:18:04.842 "sequence_count": 2048, 00:18:04.842 "buf_count": 2048 00:18:04.842 } 00:18:04.842 } 00:18:04.842 ] 00:18:04.842 }, 00:18:04.842 { 00:18:04.842 "subsystem": "bdev", 00:18:04.842 "config": [ 00:18:04.842 { 00:18:04.842 "method": "bdev_set_options", 00:18:04.842 "params": { 00:18:04.842 "bdev_io_pool_size": 65535, 00:18:04.842 "bdev_io_cache_size": 256, 00:18:04.842 "bdev_auto_examine": true, 00:18:04.842 "iobuf_small_cache_size": 128, 00:18:04.842 "iobuf_large_cache_size": 16 00:18:04.842 } 00:18:04.842 }, 00:18:04.842 { 00:18:04.842 "method": "bdev_raid_set_options", 00:18:04.842 "params": { 00:18:04.842 "process_window_size_kb": 1024, 00:18:04.842 "process_max_bandwidth_mb_sec": 0 00:18:04.842 } 00:18:04.842 }, 00:18:04.842 { 00:18:04.842 "method": "bdev_iscsi_set_options", 00:18:04.842 "params": { 00:18:04.842 "timeout_sec": 30 00:18:04.842 } 00:18:04.842 }, 00:18:04.842 { 00:18:04.842 "method": "bdev_nvme_set_options", 00:18:04.842 "params": { 00:18:04.842 "action_on_timeout": "none", 
00:18:04.842 "timeout_us": 0, 00:18:04.842 "timeout_admin_us": 0, 00:18:04.842 "keep_alive_timeout_ms": 10000, 00:18:04.842 "arbitration_burst": 0, 00:18:04.842 "low_priority_weight": 0, 00:18:04.842 "medium_priority_weight": 0, 00:18:04.842 "high_priority_weight": 0, 00:18:04.842 "nvme_adminq_poll_period_us": 10000, 00:18:04.842 "nvme_ioq_poll_period_us": 0, 00:18:04.842 "io_queue_requests": 512, 00:18:04.842 "delay_cmd_submit": true, 00:18:04.842 "transport_retry_count": 4, 00:18:04.842 "bdev_retry_count": 3, 00:18:04.842 "transport_ack_timeout": 0, 00:18:04.842 "ctrlr_loss_timeout_sec": 0, 00:18:04.842 "reconnect_delay_sec": 0, 00:18:04.842 "fast_io_fail_timeout_sec": 0, 00:18:04.842 "disable_auto_failback": false, 00:18:04.842 "generate_uuids": false, 00:18:04.842 "transport_tos": 0, 00:18:04.842 "nvme_error_stat": false, 00:18:04.842 "rdma_srq_size": 0, 00:18:04.842 "io_path_stat": false, 00:18:04.842 "allow_accel_sequence": false, 00:18:04.842 "rdma_max_cq_size": 0, 00:18:04.842 "rdma_cm_event_timeout_ms": 0, 00:18:04.842 "dhchap_digests": [ 00:18:04.842 "sha256", 00:18:04.842 "sha384", 00:18:04.842 "sha512" 00:18:04.842 ], 00:18:04.842 "dhchap_dhgroups": [ 00:18:04.842 "null", 00:18:04.842 "ffdhe2048", 00:18:04.842 "ffdhe3072", 00:18:04.842 "ffdhe4096", 00:18:04.842 "ffdhe6144", 00:18:04.842 "ffdhe8192" 00:18:04.842 ] 00:18:04.842 } 00:18:04.842 }, 00:18:04.842 { 00:18:04.842 "method": "bdev_nvme_attach_controller", 00:18:04.842 "params": { 00:18:04.842 "name": "nvme0", 00:18:04.842 "trtype": "TCP", 00:18:04.842 "adrfam": "IPv4", 00:18:04.842 "traddr": "10.0.0.2", 00:18:04.842 "trsvcid": "4420", 00:18:04.842 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.842 "prchk_reftag": false, 00:18:04.842 "prchk_guard": false, 00:18:04.842 "ctrlr_loss_timeout_sec": 0, 00:18:04.842 "reconnect_delay_sec": 0, 00:18:04.842 "fast_io_fail_timeout_sec": 0, 00:18:04.842 "psk": "key0", 00:18:04.842 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:04.842 "hdgst": false, 00:18:04.842 "ddgst": false, 00:18:04.842 "multipath": "multipath" 00:18:04.842 } 00:18:04.842 }, 00:18:04.842 { 00:18:04.842 "method": "bdev_nvme_set_hotplug", 00:18:04.843 "params": { 00:18:04.843 "period_us": 100000, 00:18:04.843 "enable": false 00:18:04.843 } 00:18:04.843 }, 00:18:04.843 { 00:18:04.843 "method": "bdev_enable_histogram", 00:18:04.843 "params": { 00:18:04.843 "name": "nvme0n1", 00:18:04.843 "enable": true 00:18:04.843 } 00:18:04.843 }, 00:18:04.843 { 00:18:04.843 "method": "bdev_wait_for_examine" 00:18:04.843 } 00:18:04.843 ] 00:18:04.843 }, 00:18:04.843 { 00:18:04.843 "subsystem": "nbd", 00:18:04.843 "config": [] 00:18:04.843 } 00:18:04.843 ] 00:18:04.843 }' 00:18:04.843 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.843 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.843 [2024-12-09 05:12:41.377519] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:18:04.843 [2024-12-09 05:12:41.377568] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3604196 ] 00:18:04.843 [2024-12-09 05:12:41.441319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.843 [2024-12-09 05:12:41.482901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.101 [2024-12-09 05:12:41.637550] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:05.669 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.669 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:05.669 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:05.669 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:05.928 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.928 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:05.928 Running I/O for 1 seconds... 00:18:07.124 5456.00 IOPS, 21.31 MiB/s 00:18:07.124 Latency(us) 00:18:07.124 [2024-12-09T04:12:43.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.124 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:07.124 Verification LBA range: start 0x0 length 0x2000 00:18:07.124 nvme0n1 : 1.02 5459.92 21.33 0.00 0.00 23217.23 4758.48 24960.67 00:18:07.124 [2024-12-09T04:12:43.770Z] =================================================================================================================== 00:18:07.124 [2024-12-09T04:12:43.770Z] Total : 5459.92 21.33 0.00 0.00 23217.23 4758.48 24960.67 00:18:07.124 { 00:18:07.124 "results": [ 00:18:07.124 { 00:18:07.124 "job": "nvme0n1", 00:18:07.124 "core_mask": "0x2", 00:18:07.124 "workload": "verify", 00:18:07.124 "status": "finished", 00:18:07.124 "verify_range": { 00:18:07.124 "start": 0, 00:18:07.124 "length": 8192 00:18:07.124 }, 00:18:07.124 "queue_depth": 128, 00:18:07.124 "io_size": 4096, 00:18:07.124 "runtime": 1.022909, 00:18:07.124 "iops": 5459.918721997754, 00:18:07.124 "mibps": 21.327807507803726, 00:18:07.124 "io_failed": 0, 00:18:07.124 "io_timeout": 0, 00:18:07.124 "avg_latency_us": 23217.228089525517, 00:18:07.124 "min_latency_us": 4758.48347826087, 00:18:07.124 "max_latency_us": 24960.667826086956 00:18:07.124 } 00:18:07.124 ], 00:18:07.124 "core_count": 1 00:18:07.124 } 00:18:07.124 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:07.124 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:07.124 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:07.124 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:18:07.124 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:18:07.124 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id 
= --pid ']' 00:18:07.124 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:07.124 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:07.124 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:07.124 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:07.124 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:07.124 nvmf_trace.0 00:18:07.124 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:18:07.124 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3604196 00:18:07.124 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3604196 ']' 00:18:07.124 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3604196 00:18:07.124 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:07.124 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:07.124 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3604196 00:18:07.124 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:07.124 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:07.124 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3604196' 00:18:07.124 killing process with pid 3604196 00:18:07.124 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3604196 00:18:07.124 Received shutdown signal, test time was about 1.000000 seconds 00:18:07.124 00:18:07.124 Latency(us) 00:18:07.124 [2024-12-09T04:12:43.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.125 [2024-12-09T04:12:43.771Z] =================================================================================================================== 00:18:07.125 [2024-12-09T04:12:43.771Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:07.125 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3604196 00:18:07.383 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:07.383 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:07.383 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:07.383 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:07.383 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:07.383 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:07.383 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:07.383 rmmod nvme_tcp 00:18:07.383 rmmod nvme_fabrics 00:18:07.383 rmmod nvme_keyring 00:18:07.383 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:07.383 05:12:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:07.383 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:07.383 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3603996 ']' 00:18:07.383 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3603996 00:18:07.384 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3603996 ']' 00:18:07.384 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3603996 00:18:07.384 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:07.384 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:07.384 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3603996 00:18:07.384 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:07.384 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:07.384 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3603996' 00:18:07.384 killing process with pid 3603996 00:18:07.384 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3603996 00:18:07.384 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3603996 00:18:07.642 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:07.642 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:07.642 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:07.642 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:18:07.642 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:07.642 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:07.642 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:07.642 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:07.642 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:07.642 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.642 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:07.642 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.HIDeWI0IJm /tmp/tmp.AVdmcJmk7w /tmp/tmp.gMWyOCJQJN 00:18:10.177 00:18:10.177 real 1m19.547s 00:18:10.177 user 2m2.581s 00:18:10.177 sys 0m29.545s 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.177 ************************************ 00:18:10.177 END TEST nvmf_tls 
00:18:10.177 ************************************ 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:10.177 ************************************ 00:18:10.177 START TEST nvmf_fips 00:18:10.177 ************************************ 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:10.177 * Looking for test storage... 00:18:10.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:10.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.177 --rc genhtml_branch_coverage=1 00:18:10.177 --rc genhtml_function_coverage=1 00:18:10.177 --rc genhtml_legend=1 00:18:10.177 --rc geninfo_all_blocks=1 00:18:10.177 --rc geninfo_unexecuted_blocks=1 00:18:10.177 00:18:10.177 ' 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:10.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.177 --rc genhtml_branch_coverage=1 00:18:10.177 --rc genhtml_function_coverage=1 00:18:10.177 --rc genhtml_legend=1 00:18:10.177 --rc geninfo_all_blocks=1 00:18:10.177 --rc geninfo_unexecuted_blocks=1 00:18:10.177 00:18:10.177 ' 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:10.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.177 --rc genhtml_branch_coverage=1 00:18:10.177 --rc genhtml_function_coverage=1 00:18:10.177 --rc genhtml_legend=1 00:18:10.177 --rc geninfo_all_blocks=1 00:18:10.177 --rc geninfo_unexecuted_blocks=1 00:18:10.177 00:18:10.177 ' 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:10.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.177 --rc genhtml_branch_coverage=1 00:18:10.177 --rc genhtml_function_coverage=1 00:18:10.177 --rc genhtml_legend=1 00:18:10.177 --rc geninfo_all_blocks=1 00:18:10.177 --rc geninfo_unexecuted_blocks=1 00:18:10.177 00:18:10.177 ' 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:10.177 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:10.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:10.178 05:12:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:10.178 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:18:10.179 Error setting digest 00:18:10.179 4082EEB7767F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:10.179 4082EEB7767F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:10.179 
05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:18:10.179 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:15.444 05:12:51 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:15.444 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:15.444 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:15.444 05:12:51 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:15.444 Found net devices under 0000:86:00.0: cvl_0_0 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:15.444 Found net devices under 0000:86:00.1: cvl_0_1 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:15.444 05:12:51 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:15.444 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:15.444 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:15.444 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:15.444 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:15.444 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:15.703 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:15.703 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:15.704 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:15.704 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:15.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:15.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:18:15.704 00:18:15.704 --- 10.0.0.2 ping statistics --- 00:18:15.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.704 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:18:15.704 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:15.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:15.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:18:15.704 00:18:15.704 --- 10.0.0.1 ping statistics --- 00:18:15.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.704 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:18:15.704 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:15.704 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:18:15.704 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:15.704 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:15.704 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:15.704 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:15.704 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:15.704 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:15.704 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:15.704 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:18:15.704 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:15.704 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:15.704 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:15.704 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3608207 00:18:15.704 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:15.704 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3608207 00:18:15.704 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3608207 ']' 00:18:15.704 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.704 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.704 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.704 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.704 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:15.704 [2024-12-09 05:12:52.267927] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:18:15.704 [2024-12-09 05:12:52.267977] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.704 [2024-12-09 05:12:52.336854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.962 [2024-12-09 05:12:52.378065] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.962 [2024-12-09 05:12:52.378097] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.962 [2024-12-09 05:12:52.378106] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.962 [2024-12-09 05:12:52.378112] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.962 [2024-12-09 05:12:52.378118] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:15.962 [2024-12-09 05:12:52.378728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.528 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.528 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:16.528 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:16.528 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:16.528 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:16.528 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.528 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:18:16.528 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:16.528 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:18:16.528 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.N53 00:18:16.528 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:16.528 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.N53 00:18:16.528 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.N53 00:18:16.528 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.N53 00:18:16.528 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:16.787 [2024-12-09 05:12:53.305697] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.787 [2024-12-09 05:12:53.321706] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:16.787 [2024-12-09 05:12:53.321903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.787 malloc0 00:18:16.787 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:16.787 05:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3608459 00:18:16.787 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:16.787 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3608459 /var/tmp/bdevperf.sock 00:18:16.787 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3608459 ']' 00:18:16.787 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.787 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.787 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:16.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:16.787 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.787 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:17.045 [2024-12-09 05:12:53.448608] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:18:17.045 [2024-12-09 05:12:53.448657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3608459 ] 00:18:17.045 [2024-12-09 05:12:53.509526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.045 [2024-12-09 05:12:53.551052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.045 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.045 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:17.045 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.N53 00:18:17.303 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:17.562 [2024-12-09 05:12:53.998637] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:17.562 TLSTESTn1 00:18:17.562 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:17.562 Running I/O for 10 seconds... 
00:18:19.870 5282.00 IOPS, 20.63 MiB/s [2024-12-09T04:12:57.449Z] 5501.50 IOPS, 21.49 MiB/s [2024-12-09T04:12:58.384Z] 5539.33 IOPS, 21.64 MiB/s [2024-12-09T04:12:59.319Z] 5565.25 IOPS, 21.74 MiB/s [2024-12-09T04:13:00.254Z] 5594.20 IOPS, 21.85 MiB/s [2024-12-09T04:13:01.198Z] 5583.33 IOPS, 21.81 MiB/s [2024-12-09T04:13:02.757Z] 5552.57 IOPS, 21.69 MiB/s [2024-12-09T04:13:03.351Z] 5534.62 IOPS, 21.62 MiB/s [2024-12-09T04:13:04.287Z] 5535.67 IOPS, 21.62 MiB/s [2024-12-09T04:13:04.287Z] 5534.80 IOPS, 21.62 MiB/s 00:18:27.641 Latency(us) 00:18:27.641 [2024-12-09T04:13:04.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.641 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:27.641 Verification LBA range: start 0x0 length 0x2000 00:18:27.641 TLSTESTn1 : 10.02 5537.66 21.63 0.00 0.00 23076.56 7408.42 28151.99 00:18:27.641 [2024-12-09T04:13:04.287Z] =================================================================================================================== 00:18:27.641 [2024-12-09T04:13:04.287Z] Total : 5537.66 21.63 0.00 0.00 23076.56 7408.42 28151.99 00:18:27.641 { 00:18:27.641 "results": [ 00:18:27.641 { 00:18:27.641 "job": "TLSTESTn1", 00:18:27.641 "core_mask": "0x4", 00:18:27.641 "workload": "verify", 00:18:27.641 "status": "finished", 00:18:27.641 "verify_range": { 00:18:27.641 "start": 0, 00:18:27.641 "length": 8192 00:18:27.641 }, 00:18:27.641 "queue_depth": 128, 00:18:27.641 "io_size": 4096, 00:18:27.641 "runtime": 10.017767, 00:18:27.641 "iops": 5537.661237279725, 00:18:27.641 "mibps": 21.631489208123927, 00:18:27.641 "io_failed": 0, 00:18:27.641 "io_timeout": 0, 00:18:27.641 "avg_latency_us": 23076.558868585536, 00:18:27.641 "min_latency_us": 7408.417391304348, 00:18:27.641 "max_latency_us": 28151.98608695652 00:18:27.641 } 00:18:27.641 ], 00:18:27.641 "core_count": 1 00:18:27.641 } 00:18:27.641 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:27.641 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:27.641 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:18:27.641 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:18:27.641 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:18:27.641 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:27.641 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:27.641 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:27.641 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:27.641 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:27.641 nvmf_trace.0 00:18:27.899 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:18:27.899 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3608459 00:18:27.899 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3608459 ']' 00:18:27.899 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 3608459 00:18:27.899 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:18:27.899 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.899 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3608459 00:18:27.899 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:27.899 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:27.899 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3608459' 00:18:27.899 killing process with pid 3608459 00:18:27.899 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3608459 00:18:27.899 Received shutdown signal, test time was about 10.000000 seconds 00:18:27.899 00:18:27.899 Latency(us) 00:18:27.899 [2024-12-09T04:13:04.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.899 [2024-12-09T04:13:04.545Z] =================================================================================================================== 00:18:27.899 [2024-12-09T04:13:04.545Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:27.899 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3608459 00:18:28.158 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:28.158 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:28.158 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:18:28.158 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:28.158 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:18:28.158 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:28.158 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:28.158 rmmod nvme_tcp 00:18:28.158 rmmod nvme_fabrics 00:18:28.158 rmmod nvme_keyring 00:18:28.158 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:28.158 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:18:28.158 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:18:28.158 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3608207 ']' 00:18:28.158 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3608207 00:18:28.158 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3608207 ']' 00:18:28.158 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3608207 00:18:28.158 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:18:28.158 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.158 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3608207 00:18:28.158 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:28.158 05:13:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:28.158 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3608207' 00:18:28.158 killing process with pid 3608207 00:18:28.158 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3608207 00:18:28.158 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3608207 00:18:28.418 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:28.418 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:28.418 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:28.418 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:18:28.418 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:18:28.418 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:18:28.418 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:28.418 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:28.418 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:28.418 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.418 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:28.418 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.328 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:30.328 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.N53 00:18:30.328 00:18:30.328 real 0m20.627s 00:18:30.328 user 0m22.454s 00:18:30.328 sys 0m8.763s 00:18:30.328 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:30.328 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:30.328 ************************************ 00:18:30.328 END TEST nvmf_fips 00:18:30.328 ************************************ 00:18:30.587 05:13:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:18:30.587 05:13:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:30.587 05:13:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:30.587 05:13:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:30.587 ************************************ 00:18:30.587 START TEST nvmf_control_msg_list 00:18:30.587 ************************************ 00:18:30.587 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:18:30.587 * Looking for test storage... 
00:18:30.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:30.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.587 --rc genhtml_branch_coverage=1 00:18:30.587 --rc genhtml_function_coverage=1 00:18:30.587 --rc genhtml_legend=1 00:18:30.587 --rc geninfo_all_blocks=1 00:18:30.587 --rc geninfo_unexecuted_blocks=1 00:18:30.587 00:18:30.587 ' 00:18:30.587 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:30.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.587 --rc genhtml_branch_coverage=1 00:18:30.587 --rc genhtml_function_coverage=1 00:18:30.587 --rc genhtml_legend=1 00:18:30.588 --rc geninfo_all_blocks=1 00:18:30.588 --rc geninfo_unexecuted_blocks=1 00:18:30.588 00:18:30.588 ' 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:30.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.588 --rc genhtml_branch_coverage=1 00:18:30.588 --rc genhtml_function_coverage=1 00:18:30.588 --rc genhtml_legend=1 00:18:30.588 --rc geninfo_all_blocks=1 00:18:30.588 --rc geninfo_unexecuted_blocks=1 00:18:30.588 00:18:30.588 ' 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:30.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.588 --rc genhtml_branch_coverage=1 00:18:30.588 --rc genhtml_function_coverage=1 00:18:30.588 --rc genhtml_legend=1 00:18:30.588 --rc geninfo_all_blocks=1 00:18:30.588 --rc geninfo_unexecuted_blocks=1 00:18:30.588 00:18:30.588 ' 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:30.588 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:18:30.588 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:18:35.883 05:13:12 
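The "[: : integer expression expected" message logged above comes from nvmf/common.sh line 33 running a numeric test against an empty expansion ('[' '' -eq 1 ']'). A hypothetical illustration of the failure and a guard that would avoid it (SOME_FLAG is a placeholder, not the actual variable name):

    [ '' -eq 1 ]                   # reproduces the logged "[: : integer expression expected"
    [ "${SOME_FLAG:-0}" -eq 1 ]    # defaulting the empty value to 0 keeps the test numeric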
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:35.883 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:35.883 05:13:12 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:35.883 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:35.883 Found net devices under 0000:86:00.0: cvl_0_0 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:35.883 Found net devices under 0000:86:00.1: cvl_0_1 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:35.883 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:35.884 05:13:12 
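Condensed from the nvmf_tcp_init trace above, the test topology is: the first E810 port (cvl_0_0) becomes the target inside a private namespace, while its sibling (cvl_0_1) stays in the root namespace as the initiator. A sketch assembled from the logged commands, not a quote of nvmf/common.sh:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                        # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # next in the trace: an iptables ACCEPT rule tagged SPDK_NVMF for TCP/4420, then ping both ways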
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:35.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:35.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.490 ms 00:18:35.884 00:18:35.884 --- 10.0.0.2 ping statistics --- 00:18:35.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.884 rtt min/avg/max/mdev = 0.490/0.490/0.490/0.000 ms 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:35.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:35.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:18:35.884 00:18:35.884 --- 10.0.0.1 ping statistics --- 00:18:35.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.884 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3613611 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3613611 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3613611 ']' 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.884 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:35.884 [2024-12-09 05:13:12.481316] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:18:35.884 [2024-12-09 05:13:12.481359] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.142 [2024-12-09 05:13:12.549448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.142 [2024-12-09 05:13:12.587940] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.142 [2024-12-09 05:13:12.587971] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.142 [2024-12-09 05:13:12.587978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.142 [2024-12-09 05:13:12.587984] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.142 [2024-12-09 05:13:12.587989] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
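nvmfappstart as logged here reduces to launching nvmf_tgt inside the target namespace and waiting for its RPC socket; the polling loop below is an assumed approximation of waitforlisten (its body is not part of this trace), with paths shortened:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &     # -i: shm id, -e 0xFFFF: tracepoint group mask
    nvmfpid=$!
    # assumed waitforlisten sketch: poll until the app answers on /var/tmp/spdk.sock
    while ! ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1              # give up if the target died
        sleep 0.1
    done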
00:18:36.142 [2024-12-09 05:13:12.588563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.142 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:36.142 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:18:36.142 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:36.142 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:36.142 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:36.142 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:36.142 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:18:36.142 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:18:36.142 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:18:36.142 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.142 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:36.142 [2024-12-09 05:13:12.729193] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:36.142 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.142 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:18:36.142 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.142 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:36.142 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.142 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:18:36.142 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.142 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:36.142 Malloc0 00:18:36.142 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.142 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:18:36.142 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.142 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:36.143 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.143 05:13:12 
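The rpc_cmd calls above configure the target for this test; issued by hand against the same socket they would look roughly like the following (rpc_cmd is the autotest helper that effectively forwards to scripts/rpc.py). Keeping --control-msg-num at 1 with a small in-capsule size appears to be what exercises the control-message list this test is named for:

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    $rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a        # -a: allow any host
    $rpc bdev_malloc_create -b Malloc0 32 512                       # 32 MiB malloc bdev, 512 B blocks
    $rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420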
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:36.143 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.143 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:36.143 [2024-12-09 05:13:12.765619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:36.143 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.143 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3613682 00:18:36.143 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3613683 00:18:36.143 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:36.143 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:36.143 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3613684 00:18:36.143 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3613682 00:18:36.143 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:36.401 [2024-12-09 05:13:12.844276] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:36.401 [2024-12-09 05:13:12.844470] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:36.401 [2024-12-09 05:13:12.844658] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:37.335 Initializing NVMe Controllers 00:18:37.335 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:18:37.335 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:18:37.335 Initialization complete. Launching workers. 
00:18:37.335 ======================================================== 00:18:37.335 Latency(us) 00:18:37.335 Device Information : IOPS MiB/s Average min max 00:18:37.335 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3688.00 14.41 270.74 187.00 40866.88 00:18:37.335 ======================================================== 00:18:37.335 Total : 3688.00 14.41 270.74 187.00 40866.88 00:18:37.335 00:18:37.335 Initializing NVMe Controllers 00:18:37.335 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:18:37.335 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:18:37.335 Initialization complete. Launching workers. 00:18:37.335 ======================================================== 00:18:37.335 Latency(us) 00:18:37.335 Device Information : IOPS MiB/s Average min max 00:18:37.335 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4225.00 16.50 236.30 157.64 432.52 00:18:37.335 ======================================================== 00:18:37.335 Total : 4225.00 16.50 236.30 157.64 432.52 00:18:37.335 00:18:37.335 [2024-12-09 05:13:13.909268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c5ef0 is same with the state(6) to be set 00:18:37.335 Initializing NVMe Controllers 00:18:37.335 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:18:37.335 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:18:37.335 Initialization complete. Launching workers. 00:18:37.335 ======================================================== 00:18:37.335 Latency(us) 00:18:37.335 Device Information : IOPS MiB/s Average min max 00:18:37.335 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3973.00 15.52 251.27 165.23 497.15 00:18:37.335 ======================================================== 00:18:37.335 Total : 3973.00 15.52 251.27 165.23 497.15 00:18:37.335 00:18:37.335 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3613683 00:18:37.335 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3613684 00:18:37.594 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:37.594 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:18:37.594 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:37.594 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:18:37.594 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:37.594 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:18:37.594 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:37.594 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:37.594 rmmod nvme_tcp 00:18:37.594 rmmod nvme_fabrics 00:18:37.594 rmmod nvme_keyring 00:18:37.594 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:37.594 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:18:37.594 05:13:14 
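The three latency tables above come from three spdk_nvme_perf clients run in parallel, one core mask each, all doing queue-depth-1 4 KiB random reads for one second against the listener. A sketch of the invocations as logged (path shortened):

    perf=./build/bin/spdk_nvme_perf
    r='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    $perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r "$r" &    # lcore 1
    $perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r "$r" &    # lcore 2
    $perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r "$r" &    # lcore 3
    wait                                                    # the trace waits on pids 3613682/3613683/3613684 in turn

Worth noting in the tables: the lcore 2 run reports a ~40.9 ms max latency while the other two stay under 0.5 ms, plausibly a request stalled behind the single control-message buffer, though the log itself does not attribute it.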
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:18:37.594 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 3613611 ']' 00:18:37.594 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3613611 00:18:37.594 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3613611 ']' 00:18:37.594 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3613611 00:18:37.594 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:18:37.594 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.594 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3613611 00:18:37.594 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:37.594 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:37.594 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3613611' 00:18:37.594 killing process with pid 3613611 00:18:37.594 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3613611 00:18:37.594 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3613611 00:18:37.853 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:37.853 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:37.853 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:37.853 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:18:37.853 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:18:37.853 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:37.853 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:18:37.853 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:37.853 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:37.853 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.853 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:37.853 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.755 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:39.755 00:18:39.755 real 0m9.376s 00:18:39.755 user 0m6.233s 00:18:39.755 sys 0m5.011s 00:18:39.755 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:39.755 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@10 -- # set +x 00:18:39.755 ************************************ 00:18:39.755 END TEST nvmf_control_msg_list 00:18:39.755 ************************************ 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:40.014 ************************************ 00:18:40.014 START TEST nvmf_wait_for_buf 00:18:40.014 ************************************ 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:18:40.014 * Looking for test storage... 00:18:40.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:40.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.014 --rc genhtml_branch_coverage=1 00:18:40.014 --rc genhtml_function_coverage=1 00:18:40.014 --rc genhtml_legend=1 00:18:40.014 --rc geninfo_all_blocks=1 00:18:40.014 --rc geninfo_unexecuted_blocks=1 00:18:40.014 00:18:40.014 ' 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:40.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.014 --rc genhtml_branch_coverage=1 00:18:40.014 --rc genhtml_function_coverage=1 00:18:40.014 --rc genhtml_legend=1 00:18:40.014 --rc geninfo_all_blocks=1 00:18:40.014 --rc geninfo_unexecuted_blocks=1 00:18:40.014 00:18:40.014 ' 00:18:40.014 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:40.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.015 --rc genhtml_branch_coverage=1 00:18:40.015 --rc genhtml_function_coverage=1 00:18:40.015 --rc genhtml_legend=1 00:18:40.015 --rc geninfo_all_blocks=1 00:18:40.015 --rc geninfo_unexecuted_blocks=1 00:18:40.015 00:18:40.015 ' 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:40.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.015 --rc genhtml_branch_coverage=1 00:18:40.015 --rc genhtml_function_coverage=1 00:18:40.015 --rc genhtml_legend=1 00:18:40.015 --rc geninfo_all_blocks=1 00:18:40.015 --rc geninfo_unexecuted_blocks=1 00:18:40.015 00:18:40.015 ' 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:40.015 05:13:16 
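The lcov --version check traced above goes through scripts/common.sh's cmp_versions ('lt 1.15 2'); a condensed sketch of that comparison reconstructed from the logged steps (the real function also validates each component via decimal):

    cmp_versions() {
        local -a ver1 ver2; local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }
    cmp_versions 1.15 '<' 2    # returns 0 here, as logged, so the pre-2.0 --rc lcov_* flag spelling is used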
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:40.015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:18:40.015 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:45.286 
05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:45.286 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:45.286 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:45.286 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:45.287 Found net devices under 0000:86:00.0: cvl_0_0 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:45.287 Found net devices under 0000:86:00.1: cvl_0_1 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:45.287 05:13:21 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:45.287 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:45.546 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:45.546 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:45.546 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:45.546 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:45.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:45.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:18:45.546 00:18:45.546 --- 10.0.0.2 ping statistics --- 00:18:45.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.546 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:18:45.546 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:45.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:45.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:18:45.546 00:18:45.546 --- 10.0.0.1 ping statistics --- 00:18:45.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.546 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:18:45.546 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.546 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:18:45.546 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:45.546 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.546 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:45.546 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:45.546 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.546 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:45.546 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:45.546 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:18:45.546 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:45.546 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.546 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:45.546 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3617383 00:18:45.546 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3617383 00:18:45.546 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:45.546 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3617383 ']' 00:18:45.546 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.546 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.546 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.546 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.546 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:45.546 [2024-12-09 05:13:22.100587] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
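Note for anyone reproducing this target/initiator topology by hand: the nvmftestinit sequence traced above reduces to roughly the following condensed sketch. It is not the exact common.sh code; the interface names cvl_0_0/cvl_0_1 and the cvl_0_0_ns_spdk namespace are simply the ones discovered in this run, and the real script additionally tags the iptables rule with an SPDK_NVMF comment so nvmftestfini can remove it on teardown.

  # Move the target-side port into its own namespace and address both ends.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP (port 4420) in through the initiator-facing interface.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Sanity check: each side must reach the other before the target is started.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
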
00:18:45.546 [2024-12-09 05:13:22.100637] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.546 [2024-12-09 05:13:22.169171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.805 [2024-12-09 05:13:22.210210] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.805 [2024-12-09 05:13:22.210243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.805 [2024-12-09 05:13:22.210250] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.805 [2024-12-09 05:13:22.210256] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.805 [2024-12-09 05:13:22.210262] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:45.805 [2024-12-09 05:13:22.210826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.805 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.805 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:18:45.805 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:45.805 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:45.805 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:45.805 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.805 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:18:45.805 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:18:45.805 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:18:45.805 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.805 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:45.805 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.806 05:13:22 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:45.806 Malloc0 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:45.806 [2024-12-09 05:13:22.377571] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:45.806 [2024-12-09 05:13:22.401765] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.806 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:46.065 [2024-12-09 05:13:22.479077] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:47.452 Initializing NVMe Controllers 00:18:47.452 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:18:47.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:18:47.452 Initialization complete. Launching workers. 00:18:47.453 ======================================================== 00:18:47.453 Latency(us) 00:18:47.453 Device Information : IOPS MiB/s Average min max 00:18:47.453 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 127.93 15.99 32366.27 7261.89 63894.35 00:18:47.453 ======================================================== 00:18:47.453 Total : 127.93 15.99 32366.27 7261.89 63894.35 00:18:47.453 00:18:47.453 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:18:47.453 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:18:47.453 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.453 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:47.453 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.453 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2022 00:18:47.453 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2022 -eq 0 ]] 00:18:47.453 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:47.453 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:18:47.453 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:47.453 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:18:47.453 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:47.453 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:18:47.453 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:47.453 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:47.453 rmmod nvme_tcp 00:18:47.453 rmmod nvme_fabrics 00:18:47.453 rmmod nvme_keyring 00:18:47.453 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:47.453 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:18:47.453 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:18:47.453 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3617383 ']' 00:18:47.453 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3617383 00:18:47.453 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3617383 ']' 00:18:47.453 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3617383 00:18:47.453 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:18:47.453 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.453 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3617383 00:18:47.711 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:47.711 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:47.711 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3617383' 00:18:47.711 killing process with pid 3617383 00:18:47.711 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3617383 00:18:47.711 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3617383 00:18:47.711 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:47.711 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:47.711 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:47.711 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:18:47.711 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:18:47.711 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:18:47.711 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:47.711 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:47.711 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:47.711 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.711 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:47.711 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.243 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:50.243 00:18:50.243 real 0m9.924s 00:18:50.243 user 0m3.856s 00:18:50.243 sys 0m4.458s 00:18:50.243 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:50.243 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:50.243 ************************************ 00:18:50.243 END TEST nvmf_wait_for_buf 00:18:50.243 ************************************ 00:18:50.244 05:13:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:18:50.244 05:13:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:18:50.244 05:13:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:18:50.244 05:13:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:18:50.244 05:13:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:18:50.244 05:13:26 
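The pass/fail logic of the wait_for_buf test that just finished is compact: the TCP transport is created with a deliberately tiny small-buffer iobuf pool (-n 24 -b 24 against a small-pool-count of 154), a 128 KiB random-read perf load is pushed through it, and the test then reads the nvmf_TCP small-pool retry counter (2022 in this run) and treats a zero value as the failure case. A hypothetical stand-alone replay of that final check, assuming the SPDK repo's scripts/rpc.py can reach the running target over the default RPC socket:

  # Query iobuf statistics and extract the nvmf_TCP small-pool retry counter,
  # exactly the jq filter used by wait_for_buf.sh in the trace above.
  retries=$(scripts/rpc.py iobuf_get_stats \
            | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
  if [[ "$retries" -eq 0 ]]; then
      echo "no small-buffer starvation observed; wait-for-buf path never exercised"
      exit 1
  fi
  echo "small iobuf pool retried $retries times under load"
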
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:55.517 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:55.517 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:18:55.517 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:55.517 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:55.518 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:55.518 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:55.518 Found net devices under 0000:86:00.0: cvl_0_0 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:55.518 Found net devices under 0000:86:00.1: cvl_0_1 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:55.518 ************************************ 00:18:55.518 START TEST nvmf_perf_adq 00:18:55.518 ************************************ 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:55.518 * Looking for test storage... 00:18:55.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:55.518 05:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:55.518 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:55.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.518 --rc genhtml_branch_coverage=1 00:18:55.518 --rc genhtml_function_coverage=1 00:18:55.519 --rc genhtml_legend=1 00:18:55.519 --rc geninfo_all_blocks=1 00:18:55.519 --rc geninfo_unexecuted_blocks=1 00:18:55.519 00:18:55.519 ' 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:55.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.519 --rc genhtml_branch_coverage=1 00:18:55.519 --rc genhtml_function_coverage=1 00:18:55.519 --rc genhtml_legend=1 00:18:55.519 --rc geninfo_all_blocks=1 00:18:55.519 --rc geninfo_unexecuted_blocks=1 00:18:55.519 00:18:55.519 ' 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:55.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.519 --rc genhtml_branch_coverage=1 00:18:55.519 --rc genhtml_function_coverage=1 00:18:55.519 --rc genhtml_legend=1 00:18:55.519 --rc geninfo_all_blocks=1 00:18:55.519 --rc geninfo_unexecuted_blocks=1 00:18:55.519 00:18:55.519 ' 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:55.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.519 --rc genhtml_branch_coverage=1 00:18:55.519 --rc genhtml_function_coverage=1 00:18:55.519 --rc genhtml_legend=1 00:18:55.519 --rc geninfo_all_blocks=1 00:18:55.519 --rc geninfo_unexecuted_blocks=1 00:18:55.519 00:18:55.519 ' 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
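The lcov gate traced above (lt 1.15 2, dispatched through cmp_versions in scripts/common.sh) is a field-wise numeric comparison of dotted version strings. A condensed, hypothetical rewrite of the same idea, not the exact helper:

  # Returns success (0) when version $1 sorts strictly before version $2.
  version_lt() {
      local -a a b
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # strictly greater: not less-than
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      done
      return 1                                        # equal versions are not less-than
  }
  version_lt 1.15 2 && echo "lcov older than 2.x: keep the 1.x branch-coverage flags"

In this run the comparison succeeds (1.15 < 2), which is why LCOV_OPTS above is populated with the --rc lcov_branch_coverage/lcov_function_coverage flags.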
00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:55.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:18:55.519 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:18:55.519 05:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:59.705 05:13:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:59.705 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:59.706 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:59.706 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:59.706 Found net devices under 0000:86:00.0: cvl_0_0 00:18:59.706 05:13:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:59.706 Found net devices under 0000:86:00.1: cvl_0_1 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:18:59.706 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:01.080 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:02.981 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:08.249 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:08.250 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:08.250 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:08.250 Found net devices under 0000:86:00.0: cvl_0_0 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:08.250 Found net devices under 0000:86:00.1: cvl_0_1 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:08.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:08.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:19:08.250 00:19:08.250 --- 10.0.0.2 ping statistics --- 00:19:08.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.250 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:08.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:08.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:19:08.250 00:19:08.250 --- 10.0.0.1 ping statistics --- 00:19:08.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.250 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3625487 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3625487 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3625487 ']' 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:08.250 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.250 [2024-12-09 05:13:44.763557] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
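Before the target application starts, the harness splits the two E810 ports across network namespaces: cvl_0_0 (10.0.0.2, the target side) is moved into cvl_0_0_ns_spdk while cvl_0_1 (10.0.0.1, the initiator side) stays in the host namespace, presumably so traffic between the two ports crosses the wire instead of being short-circuited through loopback. The iptables ACCEPT rule for port 4420 is tagged with an SPDK_NVMF comment so it can be stripped again at teardown. Boiled down to the commands visible in the trace (paths relative to the SPDK checkout):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator port stays in the host netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    # nvmf_tgt runs inside the namespace; --wait-for-rpc defers init until the RPCs below arrive.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &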
00:19:08.250 [2024-12-09 05:13:44.763612] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.250 [2024-12-09 05:13:44.835692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:08.250 [2024-12-09 05:13:44.880195] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.250 [2024-12-09 05:13:44.880236] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.250 [2024-12-09 05:13:44.880243] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.250 [2024-12-09 05:13:44.880250] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.250 [2024-12-09 05:13:44.880255] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:08.251 [2024-12-09 05:13:44.881793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.251 [2024-12-09 05:13:44.881891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:08.251 [2024-12-09 05:13:44.881975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:08.251 [2024-12-09 05:13:44.881976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.510 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.510 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:19:08.510 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:08.510 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:08.510 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.510 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.510 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:19:08.510 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:08.510 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:08.510 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.510 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.510 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.510 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:08.510 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:08.510 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.510 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.510 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.510 
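The rpc_cmd calls that follow configure the baseline (non-ADQ) target: socket options are applied to the posix implementation before framework_start_init, and --enable-placement-id 0 means connections are not grouped by the hardware queue they arrive on. Spelled out as plain rpc.py invocations against the freshly started nvmf_tgt (rpc_cmd in the trace is effectively a wrapper around the same calls):

    scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1        # 64 MB malloc bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420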
05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:08.510 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.510 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.510 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.510 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:08.510 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.510 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.510 [2024-12-09 05:13:45.092620] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.510 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.510 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:08.510 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.510 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.510 Malloc1 00:19:08.510 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.510 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:08.510 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.510 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.510 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.510 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:08.510 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.510 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.510 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.510 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:08.510 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.510 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.767 [2024-12-09 05:13:45.153742] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.767 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.767 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3625734 00:19:08.767 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:19:08.767 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:10.665 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:19:10.665 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.665 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:10.665 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.665 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:19:10.665 "tick_rate": 2300000000, 00:19:10.665 "poll_groups": [ 00:19:10.665 { 00:19:10.665 "name": "nvmf_tgt_poll_group_000", 00:19:10.665 "admin_qpairs": 1, 00:19:10.665 "io_qpairs": 1, 00:19:10.665 "current_admin_qpairs": 1, 00:19:10.665 "current_io_qpairs": 1, 00:19:10.665 "pending_bdev_io": 0, 00:19:10.665 "completed_nvme_io": 20050, 00:19:10.665 "transports": [ 00:19:10.665 { 00:19:10.665 "trtype": "TCP" 00:19:10.665 } 00:19:10.665 ] 00:19:10.665 }, 00:19:10.665 { 00:19:10.665 "name": "nvmf_tgt_poll_group_001", 00:19:10.665 "admin_qpairs": 0, 00:19:10.665 "io_qpairs": 1, 00:19:10.665 "current_admin_qpairs": 0, 00:19:10.665 "current_io_qpairs": 1, 00:19:10.665 "pending_bdev_io": 0, 00:19:10.665 "completed_nvme_io": 20260, 00:19:10.665 "transports": [ 00:19:10.665 { 00:19:10.665 "trtype": "TCP" 00:19:10.665 } 00:19:10.665 ] 00:19:10.665 }, 00:19:10.665 { 00:19:10.665 "name": "nvmf_tgt_poll_group_002", 00:19:10.665 "admin_qpairs": 0, 00:19:10.665 "io_qpairs": 1, 00:19:10.665 "current_admin_qpairs": 0, 00:19:10.665 "current_io_qpairs": 1, 00:19:10.666 "pending_bdev_io": 0, 00:19:10.666 "completed_nvme_io": 20242, 00:19:10.666 "transports": [ 00:19:10.666 { 00:19:10.666 "trtype": "TCP" 00:19:10.666 } 00:19:10.666 ] 00:19:10.666 }, 00:19:10.666 { 00:19:10.666 "name": "nvmf_tgt_poll_group_003", 00:19:10.666 "admin_qpairs": 0, 00:19:10.666 "io_qpairs": 1, 00:19:10.666 "current_admin_qpairs": 0, 00:19:10.666 "current_io_qpairs": 1, 00:19:10.666 "pending_bdev_io": 0, 00:19:10.666 "completed_nvme_io": 20033, 00:19:10.666 "transports": [ 00:19:10.666 { 00:19:10.666 "trtype": "TCP" 00:19:10.666 } 00:19:10.666 ] 00:19:10.666 } 00:19:10.666 ] 00:19:10.666 }' 00:19:10.666 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:19:10.666 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:10.666 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:19:10.666 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:19:10.666 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3625734 00:19:18.777 Initializing NVMe Controllers 00:19:18.777 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:18.777 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:18.777 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:18.777 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:18.777 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:19:18.777 Initialization complete. Launching workers. 00:19:18.777 ======================================================== 00:19:18.777 Latency(us) 00:19:18.777 Device Information : IOPS MiB/s Average min max 00:19:18.777 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10595.10 41.39 6041.81 1715.84 8970.68 00:19:18.777 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10725.80 41.90 5966.73 2084.19 10044.27 00:19:18.777 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10677.70 41.71 5993.42 2236.11 9581.16 00:19:18.777 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10625.20 41.50 6024.93 2376.32 8898.07 00:19:18.777 ======================================================== 00:19:18.777 Total : 42623.79 166.50 6006.59 1715.84 10044.27 00:19:18.777 00:19:18.777 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:19:18.777 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:18.777 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:19:18.777 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:18.777 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:19:18.777 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:18.777 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:18.777 rmmod nvme_tcp 00:19:18.778 rmmod nvme_fabrics 00:19:18.778 rmmod nvme_keyring 00:19:18.778 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:19.035 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:19:19.035 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:19:19.035 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3625487 ']' 00:19:19.035 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3625487 00:19:19.035 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3625487 ']' 00:19:19.035 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3625487 00:19:19.035 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:19:19.035 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:19.035 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3625487 00:19:19.035 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:19.035 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:19.035 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3625487' 00:19:19.035 killing process with pid 3625487 00:19:19.035 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3625487 00:19:19.035 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3625487 00:19:19.293 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:19.293 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:19.293 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:19.293 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:19:19.293 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:19:19.293 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:19.293 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:19:19.294 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:19.294 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:19.294 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.294 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:19.294 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.194 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:21.194 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:19:21.194 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:19:21.194 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:22.571 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:24.477 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:29.760 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:19:29.760 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:29.760 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:29.760 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:29.760 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:29.760 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:29.760 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.760 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:29.760 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.760 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:29.760 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:29.761 05:14:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:29.761 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:29.761 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:29.761 Found net devices under 0000:86:00.0: cvl_0_0 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:29.761 05:14:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:29.761 Found net devices under 0000:86:00.1: cvl_0_1 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:29.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:29.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms 00:19:29.761 00:19:29.761 --- 10.0.0.2 ping statistics --- 00:19:29.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.761 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:19:29.761 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:29.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:29.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:19:29.761 00:19:29.761 --- 10.0.0.1 ping statistics --- 00:19:29.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.762 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:19:29.762 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:29.762 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:19:29.762 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:29.762 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:29.762 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:29.762 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:29.762 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:29.762 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:29.762 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:29.762 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:19:29.762 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:29.762 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:29.762 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:29.762 net.core.busy_poll = 1 00:19:29.762 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:19:29.762 net.core.busy_read = 1 00:19:29.762 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:29.762 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:30.020 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:19:30.020 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:30.020 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:30.020 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:30.020 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:30.020 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:30.020 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.020 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3629516 00:19:30.020 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3629516 00:19:30.020 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:30.020 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3629516 ']' 00:19:30.020 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.020 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:30.020 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.020 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:30.020 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.020 [2024-12-09 05:14:06.565573] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:19:30.020 [2024-12-09 05:14:06.565616] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.020 [2024-12-09 05:14:06.635551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:30.278 [2024-12-09 05:14:06.679080] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
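The adq_configure_driver block a few entries back is the heart of the ADQ setup on the target port: hardware TC offload is switched on, busy polling is enabled, an mqprio root qdisc splits the device into two traffic classes (mode channel hands them to the ice driver as hardware queue groups), and a hardware-offloaded flower filter steers NVMe/TCP traffic for 10.0.0.2:4420 into the second class; the harness's set_xps_rxqs helper then maps XPS onto the matching receive queues. Collected in one place (run inside the target namespace, against your own device and IP):

    dev=cvl_0_0
    ethtool --offload $dev hw-tc-offload on
    ethtool --set-priv-flags $dev channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1 net.core.busy_read=1
    # Two traffic classes: TC0 = 2 queues starting at queue 0, TC1 = 2 queues starting at queue 2.
    tc qdisc add dev $dev root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev $dev ingress
    # Steer NVMe/TCP (dst 10.0.0.2, TCP port 4420) to TC1 entirely in hardware (skip_sw).
    tc filter add dev $dev protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 \
        ip_proto tcp dst_port 4420 skip_sw hw_tc 1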
00:19:30.279 [2024-12-09 05:14:06.679116] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.279 [2024-12-09 05:14:06.679124] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.279 [2024-12-09 05:14:06.679130] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.279 [2024-12-09 05:14:06.679139] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:30.279 [2024-12-09 05:14:06.680657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.279 [2024-12-09 05:14:06.680678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.279 [2024-12-09 05:14:06.680744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:30.279 [2024-12-09 05:14:06.680745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.895 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:30.895 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:19:30.895 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:30.895 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:30.895 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.895 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.895 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:19:30.895 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:30.895 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:30.895 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.895 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.895 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.895 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:30.895 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:30.895 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.895 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.895 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.895 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:30.895 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.895 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.221 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.221 05:14:07 
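For the ADQ run the only functional differences on the target side are the socket placement mode and the transport's socket priority: --enable-placement-id 1 asks SPDK's posix sock layer to group incoming connections by the NIC queue (NAPI ID) they arrive on, and the --sock-priority 1 used on the transport created next marks the target's TCP sockets with priority 1 so that the mqprio map above ("map 0 1") carries their traffic on TC1. The bdev/subsystem/listener setup is otherwise identical to the baseline:

    scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1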
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:31.221 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.221 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.221 [2024-12-09 05:14:07.586820] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.221 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.221 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:31.221 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.221 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.221 Malloc1 00:19:31.221 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.221 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:31.221 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.221 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.221 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.221 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:31.221 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.221 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.221 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.221 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:31.221 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.221 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.221 [2024-12-09 05:14:07.650861] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.221 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.221 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3629682 00:19:31.221 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:19:31.221 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:33.121 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:19:33.121 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.121 05:14:09 
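The target-side configuration traced above then builds one malloc-backed subsystem and starts the initiator load; condensed (again as scripts/rpc.py equivalents of the rpc_cmd calls, with the values from this run):
scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1   # socket priority 1 lines up with the hw_tc 1 filter
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1                                    # 64 MiB RAM disk, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator: 10 s of 4 KiB random reads at queue depth 64, pinned to cores 4-7 (-c 0xF0)
build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'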
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:33.121 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.121 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:19:33.121 "tick_rate": 2300000000, 00:19:33.121 "poll_groups": [ 00:19:33.121 { 00:19:33.121 "name": "nvmf_tgt_poll_group_000", 00:19:33.121 "admin_qpairs": 1, 00:19:33.121 "io_qpairs": 1, 00:19:33.121 "current_admin_qpairs": 1, 00:19:33.121 "current_io_qpairs": 1, 00:19:33.121 "pending_bdev_io": 0, 00:19:33.121 "completed_nvme_io": 25534, 00:19:33.121 "transports": [ 00:19:33.121 { 00:19:33.121 "trtype": "TCP" 00:19:33.121 } 00:19:33.121 ] 00:19:33.121 }, 00:19:33.121 { 00:19:33.121 "name": "nvmf_tgt_poll_group_001", 00:19:33.121 "admin_qpairs": 0, 00:19:33.121 "io_qpairs": 3, 00:19:33.121 "current_admin_qpairs": 0, 00:19:33.121 "current_io_qpairs": 3, 00:19:33.121 "pending_bdev_io": 0, 00:19:33.121 "completed_nvme_io": 27661, 00:19:33.121 "transports": [ 00:19:33.121 { 00:19:33.121 "trtype": "TCP" 00:19:33.121 } 00:19:33.121 ] 00:19:33.121 }, 00:19:33.121 { 00:19:33.121 "name": "nvmf_tgt_poll_group_002", 00:19:33.121 "admin_qpairs": 0, 00:19:33.121 "io_qpairs": 0, 00:19:33.121 "current_admin_qpairs": 0, 00:19:33.121 "current_io_qpairs": 0, 00:19:33.121 "pending_bdev_io": 0, 00:19:33.121 "completed_nvme_io": 0, 00:19:33.121 "transports": [ 00:19:33.121 { 00:19:33.121 "trtype": "TCP" 00:19:33.121 } 00:19:33.121 ] 00:19:33.121 }, 00:19:33.121 { 00:19:33.121 "name": "nvmf_tgt_poll_group_003", 00:19:33.121 "admin_qpairs": 0, 00:19:33.121 "io_qpairs": 0, 00:19:33.121 "current_admin_qpairs": 0, 00:19:33.121 "current_io_qpairs": 0, 00:19:33.121 "pending_bdev_io": 0, 00:19:33.121 "completed_nvme_io": 0, 00:19:33.121 "transports": [ 00:19:33.121 { 00:19:33.121 "trtype": "TCP" 00:19:33.121 } 00:19:33.121 ] 00:19:33.121 } 00:19:33.121 ] 00:19:33.121 }' 00:19:33.121 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:33.121 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:19:33.121 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:19:33.121 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:19:33.121 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3629682 00:19:43.086 Initializing NVMe Controllers 00:19:43.086 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:43.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:43.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:43.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:43.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:43.086 Initialization complete. Launching workers. 
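The pass/fail check traced above counts how many of the four poll groups finished the run without any I/O qpairs; with ADQ steering connections onto the dedicated traffic class, two groups stay idle here, so the [[ 2 -lt 2 ]] guard does not trip. The check itself is:
scripts/rpc.py nvmf_get_stats \
  | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
  | wc -l          # -> 2 idle poll groups in this run; the test aborts only if this drops below 2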
00:19:43.086 ======================================================== 00:19:43.086 Latency(us) 00:19:43.086 Device Information : IOPS MiB/s Average min max 00:19:43.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5343.80 20.87 11976.98 1582.84 60393.33 00:19:43.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14238.60 55.62 4494.23 1312.22 45927.35 00:19:43.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4766.20 18.62 13482.35 1738.45 59574.75 00:19:43.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5245.60 20.49 12201.29 1736.79 59150.41 00:19:43.086 ======================================================== 00:19:43.086 Total : 29594.20 115.60 8659.02 1312.22 60393.33 00:19:43.086 00:19:43.086 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:19:43.086 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:43.086 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:19:43.086 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:43.086 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:19:43.086 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:43.086 05:14:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:43.086 rmmod nvme_tcp 00:19:43.086 rmmod nvme_fabrics 00:19:43.086 rmmod nvme_keyring 00:19:43.086 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:43.086 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:19:43.086 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:19:43.086 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3629516 ']' 00:19:43.086 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3629516 00:19:43.086 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3629516 ']' 00:19:43.086 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3629516 00:19:43.086 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:19:43.087 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:43.087 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3629516 00:19:43.087 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:43.087 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:43.087 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3629516' 00:19:43.087 killing process with pid 3629516 00:19:43.087 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3629516 00:19:43.087 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3629516 00:19:43.087 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:43.087 
05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:43.087 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:43.087 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:19:43.087 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:19:43.087 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:19:43.087 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:43.087 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:43.087 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:43.087 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.087 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:43.087 05:14:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:19:44.024 00:19:44.024 real 0m49.027s 00:19:44.024 user 2m47.461s 00:19:44.024 sys 0m9.570s 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:44.024 ************************************ 00:19:44.024 END TEST nvmf_perf_adq 00:19:44.024 ************************************ 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:44.024 ************************************ 00:19:44.024 START TEST nvmf_shutdown 00:19:44.024 ************************************ 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:44.024 * Looking for test storage... 
00:19:44.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:44.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.024 --rc genhtml_branch_coverage=1 00:19:44.024 --rc genhtml_function_coverage=1 00:19:44.024 --rc genhtml_legend=1 00:19:44.024 --rc geninfo_all_blocks=1 00:19:44.024 --rc geninfo_unexecuted_blocks=1 00:19:44.024 00:19:44.024 ' 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:44.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.024 --rc genhtml_branch_coverage=1 00:19:44.024 --rc genhtml_function_coverage=1 00:19:44.024 --rc genhtml_legend=1 00:19:44.024 --rc geninfo_all_blocks=1 00:19:44.024 --rc geninfo_unexecuted_blocks=1 00:19:44.024 00:19:44.024 ' 00:19:44.024 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:44.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.024 --rc genhtml_branch_coverage=1 00:19:44.024 --rc genhtml_function_coverage=1 00:19:44.024 --rc genhtml_legend=1 00:19:44.024 --rc geninfo_all_blocks=1 00:19:44.024 --rc geninfo_unexecuted_blocks=1 00:19:44.024 00:19:44.024 ' 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:44.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.283 --rc genhtml_branch_coverage=1 00:19:44.283 --rc genhtml_function_coverage=1 00:19:44.283 --rc genhtml_legend=1 00:19:44.283 --rc geninfo_all_blocks=1 00:19:44.283 --rc geninfo_unexecuted_blocks=1 00:19:44.283 00:19:44.283 ' 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:44.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:44.283 05:14:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:44.283 ************************************ 00:19:44.283 START TEST nvmf_shutdown_tc1 00:19:44.283 ************************************ 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:19:44.283 05:14:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:50.848 05:14:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:50.848 05:14:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:50.848 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:50.848 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.848 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:50.849 Found net devices under 0000:86:00.0: cvl_0_0 00:19:50.849 05:14:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:50.849 Found net devices under 0000:86:00.1: cvl_0_1 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:50.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:50.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:19:50.849 00:19:50.849 --- 10.0.0.2 ping statistics --- 00:19:50.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.849 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:50.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:50.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:19:50.849 00:19:50.849 --- 10.0.0.1 ping statistics --- 00:19:50.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.849 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3634994 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3634994 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3634994 ']' 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
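The nvmf_tcp_init sequence traced above wires the two E810 ports into a point-to-point test topology, with the target port isolated in a network namespace; condensed, with the device and address values from this run:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open the NVMe/TCP port toward the initiator
ping -c 1 10.0.0.2                                               # root ns -> namespace (0.437 ms above)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # namespace -> root ns (0.204 ms above)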
00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:50.849 [2024-12-09 05:14:26.645238] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:19:50.849 [2024-12-09 05:14:26.645292] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.849 [2024-12-09 05:14:26.716530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:50.849 [2024-12-09 05:14:26.761173] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.849 [2024-12-09 05:14:26.761211] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.849 [2024-12-09 05:14:26.761219] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:50.849 [2024-12-09 05:14:26.761225] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:50.849 [2024-12-09 05:14:26.761230] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:50.849 [2024-12-09 05:14:26.762866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.849 [2024-12-09 05:14:26.762956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:50.849 [2024-12-09 05:14:26.763043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:50.849 [2024-12-09 05:14:26.763044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.849 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:50.850 [2024-12-09 05:14:26.910020] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:19:50.850 05:14:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.850 05:14:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:50.850 Malloc1 
00:19:50.850 [2024-12-09 05:14:27.019996] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:50.850 Malloc2 00:19:50.850 Malloc3 00:19:50.850 Malloc4 00:19:50.850 Malloc5 00:19:50.850 Malloc6 00:19:50.850 Malloc7 00:19:50.850 Malloc8 00:19:50.850 Malloc9 00:19:50.850 Malloc10 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3635127 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3635127 /var/tmp/bdevperf.sock 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3635127 ']' 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
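The bdev_svc instance launched above is fed a config generated by gen_nvmf_target_json 1..10; the per-subsystem heredoc template traced below expands, for the first subsystem in this run (tcp transport, target 10.0.0.2:4420), to roughly the following attach entry (the surrounding config wrapper is omitted here):
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}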
00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:50.850 { 00:19:50.850 "params": { 00:19:50.850 "name": "Nvme$subsystem", 00:19:50.850 "trtype": "$TEST_TRANSPORT", 00:19:50.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.850 "adrfam": "ipv4", 00:19:50.850 "trsvcid": "$NVMF_PORT", 00:19:50.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.850 "hdgst": ${hdgst:-false}, 00:19:50.850 "ddgst": ${ddgst:-false} 00:19:50.850 }, 00:19:50.850 "method": "bdev_nvme_attach_controller" 00:19:50.850 } 00:19:50.850 EOF 00:19:50.850 )") 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:50.850 { 00:19:50.850 "params": { 00:19:50.850 "name": "Nvme$subsystem", 00:19:50.850 "trtype": "$TEST_TRANSPORT", 00:19:50.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.850 "adrfam": "ipv4", 00:19:50.850 "trsvcid": "$NVMF_PORT", 00:19:50.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.850 "hdgst": ${hdgst:-false}, 00:19:50.850 "ddgst": ${ddgst:-false} 00:19:50.850 }, 00:19:50.850 "method": "bdev_nvme_attach_controller" 00:19:50.850 } 00:19:50.850 EOF 00:19:50.850 )") 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:50.850 { 00:19:50.850 "params": { 00:19:50.850 "name": "Nvme$subsystem", 00:19:50.850 "trtype": "$TEST_TRANSPORT", 00:19:50.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.850 "adrfam": "ipv4", 00:19:50.850 "trsvcid": "$NVMF_PORT", 00:19:50.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.850 "hdgst": ${hdgst:-false}, 00:19:50.850 "ddgst": ${ddgst:-false} 00:19:50.850 }, 00:19:50.850 "method": "bdev_nvme_attach_controller" 00:19:50.850 } 00:19:50.850 EOF 00:19:50.850 )") 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:19:50.850 { 00:19:50.850 "params": { 00:19:50.850 "name": "Nvme$subsystem", 00:19:50.850 "trtype": "$TEST_TRANSPORT", 00:19:50.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.850 "adrfam": "ipv4", 00:19:50.850 "trsvcid": "$NVMF_PORT", 00:19:50.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.850 "hdgst": ${hdgst:-false}, 00:19:50.850 "ddgst": ${ddgst:-false} 00:19:50.850 }, 00:19:50.850 "method": "bdev_nvme_attach_controller" 00:19:50.850 } 00:19:50.850 EOF 00:19:50.850 )") 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:50.850 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:50.850 { 00:19:50.850 "params": { 00:19:50.850 "name": "Nvme$subsystem", 00:19:50.851 "trtype": "$TEST_TRANSPORT", 00:19:50.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.851 "adrfam": "ipv4", 00:19:50.851 "trsvcid": "$NVMF_PORT", 00:19:50.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.851 "hdgst": ${hdgst:-false}, 00:19:50.851 "ddgst": ${ddgst:-false} 00:19:50.851 }, 00:19:50.851 "method": "bdev_nvme_attach_controller" 00:19:50.851 } 00:19:50.851 EOF 00:19:50.851 )") 00:19:50.851 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:50.851 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:50.851 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:50.851 { 00:19:50.851 "params": { 00:19:50.851 "name": "Nvme$subsystem", 00:19:50.851 "trtype": "$TEST_TRANSPORT", 00:19:50.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.851 "adrfam": "ipv4", 00:19:50.851 "trsvcid": "$NVMF_PORT", 00:19:50.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.851 "hdgst": ${hdgst:-false}, 00:19:50.851 "ddgst": ${ddgst:-false} 00:19:50.851 }, 00:19:50.851 "method": "bdev_nvme_attach_controller" 00:19:50.851 } 00:19:50.851 EOF 00:19:50.851 )") 00:19:50.851 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:50.851 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:50.851 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:50.851 { 00:19:50.851 "params": { 00:19:50.851 "name": "Nvme$subsystem", 00:19:50.851 "trtype": "$TEST_TRANSPORT", 00:19:50.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.851 "adrfam": "ipv4", 00:19:50.851 "trsvcid": "$NVMF_PORT", 00:19:50.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.851 "hdgst": ${hdgst:-false}, 00:19:50.851 "ddgst": ${ddgst:-false} 00:19:50.851 }, 00:19:50.851 "method": "bdev_nvme_attach_controller" 00:19:50.851 } 00:19:50.851 EOF 00:19:50.851 )") 00:19:50.851 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:50.851 [2024-12-09 05:14:27.490276] Starting SPDK 
v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:19:50.851 [2024-12-09 05:14:27.490325] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:51.110 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:51.110 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:51.110 { 00:19:51.110 "params": { 00:19:51.110 "name": "Nvme$subsystem", 00:19:51.110 "trtype": "$TEST_TRANSPORT", 00:19:51.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.110 "adrfam": "ipv4", 00:19:51.110 "trsvcid": "$NVMF_PORT", 00:19:51.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.110 "hdgst": ${hdgst:-false}, 00:19:51.110 "ddgst": ${ddgst:-false} 00:19:51.110 }, 00:19:51.110 "method": "bdev_nvme_attach_controller" 00:19:51.110 } 00:19:51.110 EOF 00:19:51.110 )") 00:19:51.110 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:51.110 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:51.110 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:51.110 { 00:19:51.110 "params": { 00:19:51.110 "name": "Nvme$subsystem", 00:19:51.110 "trtype": "$TEST_TRANSPORT", 00:19:51.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.110 "adrfam": "ipv4", 00:19:51.110 "trsvcid": "$NVMF_PORT", 00:19:51.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.110 "hdgst": ${hdgst:-false}, 00:19:51.110 "ddgst": ${ddgst:-false} 00:19:51.110 }, 00:19:51.110 "method": "bdev_nvme_attach_controller" 00:19:51.110 } 00:19:51.110 EOF 00:19:51.110 )") 00:19:51.110 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:51.110 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:51.110 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:51.110 { 00:19:51.110 "params": { 00:19:51.110 "name": "Nvme$subsystem", 00:19:51.110 "trtype": "$TEST_TRANSPORT", 00:19:51.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.110 "adrfam": "ipv4", 00:19:51.110 "trsvcid": "$NVMF_PORT", 00:19:51.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.110 "hdgst": ${hdgst:-false}, 00:19:51.110 "ddgst": ${ddgst:-false} 00:19:51.110 }, 00:19:51.110 "method": "bdev_nvme_attach_controller" 00:19:51.110 } 00:19:51.110 EOF 00:19:51.110 )") 00:19:51.110 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:51.110 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:19:51.110 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:19:51.110 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:51.110 "params": { 00:19:51.110 "name": "Nvme1", 00:19:51.110 "trtype": "tcp", 00:19:51.110 "traddr": "10.0.0.2", 00:19:51.110 "adrfam": "ipv4", 00:19:51.110 "trsvcid": "4420", 00:19:51.110 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.110 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.110 "hdgst": false, 00:19:51.110 "ddgst": false 00:19:51.110 }, 00:19:51.110 "method": "bdev_nvme_attach_controller" 00:19:51.110 },{ 00:19:51.110 "params": { 00:19:51.110 "name": "Nvme2", 00:19:51.110 "trtype": "tcp", 00:19:51.110 "traddr": "10.0.0.2", 00:19:51.110 "adrfam": "ipv4", 00:19:51.110 "trsvcid": "4420", 00:19:51.110 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:51.110 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:51.110 "hdgst": false, 00:19:51.110 "ddgst": false 00:19:51.110 }, 00:19:51.110 "method": "bdev_nvme_attach_controller" 00:19:51.110 },{ 00:19:51.110 "params": { 00:19:51.110 "name": "Nvme3", 00:19:51.110 "trtype": "tcp", 00:19:51.110 "traddr": "10.0.0.2", 00:19:51.110 "adrfam": "ipv4", 00:19:51.110 "trsvcid": "4420", 00:19:51.110 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:51.110 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:51.110 "hdgst": false, 00:19:51.110 "ddgst": false 00:19:51.110 }, 00:19:51.110 "method": "bdev_nvme_attach_controller" 00:19:51.110 },{ 00:19:51.110 "params": { 00:19:51.110 "name": "Nvme4", 00:19:51.110 "trtype": "tcp", 00:19:51.110 "traddr": "10.0.0.2", 00:19:51.110 "adrfam": "ipv4", 00:19:51.110 "trsvcid": "4420", 00:19:51.110 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:51.110 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:51.110 "hdgst": false, 00:19:51.110 "ddgst": false 00:19:51.110 }, 00:19:51.110 "method": "bdev_nvme_attach_controller" 00:19:51.110 },{ 00:19:51.110 "params": { 00:19:51.110 "name": "Nvme5", 00:19:51.110 "trtype": "tcp", 00:19:51.110 "traddr": "10.0.0.2", 00:19:51.110 "adrfam": "ipv4", 00:19:51.110 "trsvcid": "4420", 00:19:51.110 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:51.110 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:51.110 "hdgst": false, 00:19:51.110 "ddgst": false 00:19:51.110 }, 00:19:51.110 "method": "bdev_nvme_attach_controller" 00:19:51.110 },{ 00:19:51.110 "params": { 00:19:51.110 "name": "Nvme6", 00:19:51.110 "trtype": "tcp", 00:19:51.110 "traddr": "10.0.0.2", 00:19:51.110 "adrfam": "ipv4", 00:19:51.110 "trsvcid": "4420", 00:19:51.110 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:51.110 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:51.110 "hdgst": false, 00:19:51.110 "ddgst": false 00:19:51.110 }, 00:19:51.110 "method": "bdev_nvme_attach_controller" 00:19:51.110 },{ 00:19:51.110 "params": { 00:19:51.110 "name": "Nvme7", 00:19:51.110 "trtype": "tcp", 00:19:51.110 "traddr": "10.0.0.2", 00:19:51.111 "adrfam": "ipv4", 00:19:51.111 "trsvcid": "4420", 00:19:51.111 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:51.111 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:51.111 "hdgst": false, 00:19:51.111 "ddgst": false 00:19:51.111 }, 00:19:51.111 "method": "bdev_nvme_attach_controller" 00:19:51.111 },{ 00:19:51.111 "params": { 00:19:51.111 "name": "Nvme8", 00:19:51.111 "trtype": "tcp", 00:19:51.111 "traddr": "10.0.0.2", 00:19:51.111 "adrfam": "ipv4", 00:19:51.111 "trsvcid": "4420", 00:19:51.111 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:51.111 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:19:51.111 "hdgst": false, 00:19:51.111 "ddgst": false 00:19:51.111 }, 00:19:51.111 "method": "bdev_nvme_attach_controller" 00:19:51.111 },{ 00:19:51.111 "params": { 00:19:51.111 "name": "Nvme9", 00:19:51.111 "trtype": "tcp", 00:19:51.111 "traddr": "10.0.0.2", 00:19:51.111 "adrfam": "ipv4", 00:19:51.111 "trsvcid": "4420", 00:19:51.111 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:51.111 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:51.111 "hdgst": false, 00:19:51.111 "ddgst": false 00:19:51.111 }, 00:19:51.111 "method": "bdev_nvme_attach_controller" 00:19:51.111 },{ 00:19:51.111 "params": { 00:19:51.111 "name": "Nvme10", 00:19:51.111 "trtype": "tcp", 00:19:51.111 "traddr": "10.0.0.2", 00:19:51.111 "adrfam": "ipv4", 00:19:51.111 "trsvcid": "4420", 00:19:51.111 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:51.111 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:51.111 "hdgst": false, 00:19:51.111 "ddgst": false 00:19:51.111 }, 00:19:51.111 "method": "bdev_nvme_attach_controller" 00:19:51.111 }' 00:19:51.111 [2024-12-09 05:14:27.558179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.111 [2024-12-09 05:14:27.601897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.013 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.013 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:19:53.013 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:53.013 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.013 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:53.013 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.013 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3635127 00:19:53.013 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:19:53.013 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:19:53.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3635127 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:53.949 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3634994 00:19:53.949 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:53.949 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:53.949 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:19:53.949 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:19:53.949 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:19:53.949 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:53.949 { 00:19:53.949 "params": { 00:19:53.949 "name": "Nvme$subsystem", 00:19:53.949 "trtype": "$TEST_TRANSPORT", 00:19:53.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.949 "adrfam": "ipv4", 00:19:53.949 "trsvcid": "$NVMF_PORT", 00:19:53.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.949 "hdgst": ${hdgst:-false}, 00:19:53.949 "ddgst": ${ddgst:-false} 00:19:53.949 }, 00:19:53.949 "method": "bdev_nvme_attach_controller" 00:19:53.949 } 00:19:53.949 EOF 00:19:53.949 )") 00:19:53.949 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:53.949 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:53.949 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:53.949 { 00:19:53.949 "params": { 00:19:53.949 "name": "Nvme$subsystem", 00:19:53.949 "trtype": "$TEST_TRANSPORT", 00:19:53.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.949 "adrfam": "ipv4", 00:19:53.949 "trsvcid": "$NVMF_PORT", 00:19:53.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.949 "hdgst": ${hdgst:-false}, 00:19:53.949 "ddgst": ${ddgst:-false} 00:19:53.949 }, 00:19:53.949 "method": "bdev_nvme_attach_controller" 00:19:53.949 } 00:19:53.949 EOF 00:19:53.949 )") 00:19:53.949 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:53.949 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:53.949 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:53.949 { 00:19:53.949 "params": { 00:19:53.949 "name": "Nvme$subsystem", 00:19:53.949 "trtype": "$TEST_TRANSPORT", 00:19:53.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.949 "adrfam": "ipv4", 00:19:53.949 "trsvcid": "$NVMF_PORT", 00:19:53.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.949 "hdgst": ${hdgst:-false}, 00:19:53.949 "ddgst": ${ddgst:-false} 00:19:53.949 }, 00:19:53.949 "method": "bdev_nvme_attach_controller" 00:19:53.949 } 00:19:53.949 EOF 00:19:53.949 )") 00:19:53.949 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:53.949 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:53.949 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:53.949 { 00:19:53.949 "params": { 00:19:53.949 "name": "Nvme$subsystem", 00:19:53.949 "trtype": "$TEST_TRANSPORT", 00:19:53.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.949 "adrfam": "ipv4", 00:19:53.949 "trsvcid": "$NVMF_PORT", 00:19:53.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.949 "hdgst": ${hdgst:-false}, 00:19:53.949 "ddgst": ${ddgst:-false} 00:19:53.949 }, 00:19:53.949 "method": "bdev_nvme_attach_controller" 00:19:53.949 } 00:19:53.949 EOF 00:19:53.949 )") 00:19:53.949 05:14:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:53.949 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:53.949 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:53.949 { 00:19:53.949 "params": { 00:19:53.949 "name": "Nvme$subsystem", 00:19:53.949 "trtype": "$TEST_TRANSPORT", 00:19:53.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.949 "adrfam": "ipv4", 00:19:53.949 "trsvcid": "$NVMF_PORT", 00:19:53.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.949 "hdgst": ${hdgst:-false}, 00:19:53.949 "ddgst": ${ddgst:-false} 00:19:53.949 }, 00:19:53.949 "method": "bdev_nvme_attach_controller" 00:19:53.949 } 00:19:53.949 EOF 00:19:53.949 )") 00:19:53.949 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:53.949 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:53.949 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:53.949 { 00:19:53.949 "params": { 00:19:53.950 "name": "Nvme$subsystem", 00:19:53.950 "trtype": "$TEST_TRANSPORT", 00:19:53.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.950 "adrfam": "ipv4", 00:19:53.950 "trsvcid": "$NVMF_PORT", 00:19:53.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.950 "hdgst": ${hdgst:-false}, 00:19:53.950 "ddgst": ${ddgst:-false} 00:19:53.950 }, 00:19:53.950 "method": "bdev_nvme_attach_controller" 00:19:53.950 } 00:19:53.950 EOF 00:19:53.950 )") 00:19:53.950 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:53.950 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:53.950 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:53.950 { 00:19:53.950 "params": { 00:19:53.950 "name": "Nvme$subsystem", 00:19:53.950 "trtype": "$TEST_TRANSPORT", 00:19:53.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.950 "adrfam": "ipv4", 00:19:53.950 "trsvcid": "$NVMF_PORT", 00:19:53.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.950 "hdgst": ${hdgst:-false}, 00:19:53.950 "ddgst": ${ddgst:-false} 00:19:53.950 }, 00:19:53.950 "method": "bdev_nvme_attach_controller" 00:19:53.950 } 00:19:53.950 EOF 00:19:53.950 )") 00:19:53.950 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:53.950 [2024-12-09 05:14:30.425333] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:19:53.950 [2024-12-09 05:14:30.425383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3635679 ] 00:19:53.950 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:53.950 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:53.950 { 00:19:53.950 "params": { 00:19:53.950 "name": "Nvme$subsystem", 00:19:53.950 "trtype": "$TEST_TRANSPORT", 00:19:53.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.950 "adrfam": "ipv4", 00:19:53.950 "trsvcid": "$NVMF_PORT", 00:19:53.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.950 "hdgst": ${hdgst:-false}, 00:19:53.950 "ddgst": ${ddgst:-false} 00:19:53.950 }, 00:19:53.950 "method": "bdev_nvme_attach_controller" 00:19:53.950 } 00:19:53.950 EOF 00:19:53.950 )") 00:19:53.950 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:53.950 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:53.950 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:53.950 { 00:19:53.950 "params": { 00:19:53.950 "name": "Nvme$subsystem", 00:19:53.950 "trtype": "$TEST_TRANSPORT", 00:19:53.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.950 "adrfam": "ipv4", 00:19:53.950 "trsvcid": "$NVMF_PORT", 00:19:53.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.950 "hdgst": ${hdgst:-false}, 00:19:53.950 "ddgst": ${ddgst:-false} 00:19:53.950 }, 00:19:53.950 "method": "bdev_nvme_attach_controller" 00:19:53.950 } 00:19:53.950 EOF 00:19:53.950 )") 00:19:53.950 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:53.950 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:53.950 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:53.950 { 00:19:53.950 "params": { 00:19:53.950 "name": "Nvme$subsystem", 00:19:53.950 "trtype": "$TEST_TRANSPORT", 00:19:53.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.950 "adrfam": "ipv4", 00:19:53.950 "trsvcid": "$NVMF_PORT", 00:19:53.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.950 "hdgst": ${hdgst:-false}, 00:19:53.950 "ddgst": ${ddgst:-false} 00:19:53.950 }, 00:19:53.950 "method": "bdev_nvme_attach_controller" 00:19:53.950 } 00:19:53.950 EOF 00:19:53.950 )") 00:19:53.950 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:53.950 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:19:53.950 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:19:53.950 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:53.950 "params": { 00:19:53.950 "name": "Nvme1", 00:19:53.950 "trtype": "tcp", 00:19:53.950 "traddr": "10.0.0.2", 00:19:53.950 "adrfam": "ipv4", 00:19:53.950 "trsvcid": "4420", 00:19:53.950 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.950 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:53.950 "hdgst": false, 00:19:53.950 "ddgst": false 00:19:53.950 }, 00:19:53.950 "method": "bdev_nvme_attach_controller" 00:19:53.950 },{ 00:19:53.950 "params": { 00:19:53.950 "name": "Nvme2", 00:19:53.950 "trtype": "tcp", 00:19:53.950 "traddr": "10.0.0.2", 00:19:53.950 "adrfam": "ipv4", 00:19:53.950 "trsvcid": "4420", 00:19:53.950 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:53.950 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:53.950 "hdgst": false, 00:19:53.950 "ddgst": false 00:19:53.950 }, 00:19:53.950 "method": "bdev_nvme_attach_controller" 00:19:53.950 },{ 00:19:53.950 "params": { 00:19:53.950 "name": "Nvme3", 00:19:53.950 "trtype": "tcp", 00:19:53.950 "traddr": "10.0.0.2", 00:19:53.950 "adrfam": "ipv4", 00:19:53.950 "trsvcid": "4420", 00:19:53.950 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:53.950 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:53.950 "hdgst": false, 00:19:53.950 "ddgst": false 00:19:53.950 }, 00:19:53.950 "method": "bdev_nvme_attach_controller" 00:19:53.950 },{ 00:19:53.950 "params": { 00:19:53.950 "name": "Nvme4", 00:19:53.950 "trtype": "tcp", 00:19:53.950 "traddr": "10.0.0.2", 00:19:53.950 "adrfam": "ipv4", 00:19:53.950 "trsvcid": "4420", 00:19:53.950 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:53.950 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:53.950 "hdgst": false, 00:19:53.950 "ddgst": false 00:19:53.950 }, 00:19:53.950 "method": "bdev_nvme_attach_controller" 00:19:53.950 },{ 00:19:53.950 "params": { 00:19:53.950 "name": "Nvme5", 00:19:53.950 "trtype": "tcp", 00:19:53.950 "traddr": "10.0.0.2", 00:19:53.950 "adrfam": "ipv4", 00:19:53.950 "trsvcid": "4420", 00:19:53.950 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:53.950 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:53.950 "hdgst": false, 00:19:53.950 "ddgst": false 00:19:53.950 }, 00:19:53.950 "method": "bdev_nvme_attach_controller" 00:19:53.950 },{ 00:19:53.950 "params": { 00:19:53.950 "name": "Nvme6", 00:19:53.950 "trtype": "tcp", 00:19:53.950 "traddr": "10.0.0.2", 00:19:53.950 "adrfam": "ipv4", 00:19:53.950 "trsvcid": "4420", 00:19:53.950 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:53.950 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:53.950 "hdgst": false, 00:19:53.950 "ddgst": false 00:19:53.950 }, 00:19:53.950 "method": "bdev_nvme_attach_controller" 00:19:53.950 },{ 00:19:53.950 "params": { 00:19:53.950 "name": "Nvme7", 00:19:53.950 "trtype": "tcp", 00:19:53.950 "traddr": "10.0.0.2", 00:19:53.950 "adrfam": "ipv4", 00:19:53.950 "trsvcid": "4420", 00:19:53.950 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:53.950 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:53.950 "hdgst": false, 00:19:53.950 "ddgst": false 00:19:53.950 }, 00:19:53.950 "method": "bdev_nvme_attach_controller" 00:19:53.950 },{ 00:19:53.950 "params": { 00:19:53.950 "name": "Nvme8", 00:19:53.950 "trtype": "tcp", 00:19:53.950 "traddr": "10.0.0.2", 00:19:53.950 "adrfam": "ipv4", 00:19:53.950 "trsvcid": "4420", 00:19:53.950 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:53.950 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:19:53.950 "hdgst": false, 00:19:53.950 "ddgst": false 00:19:53.950 }, 00:19:53.950 "method": "bdev_nvme_attach_controller" 00:19:53.950 },{ 00:19:53.950 "params": { 00:19:53.950 "name": "Nvme9", 00:19:53.950 "trtype": "tcp", 00:19:53.950 "traddr": "10.0.0.2", 00:19:53.950 "adrfam": "ipv4", 00:19:53.950 "trsvcid": "4420", 00:19:53.950 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:53.950 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:53.950 "hdgst": false, 00:19:53.950 "ddgst": false 00:19:53.950 }, 00:19:53.950 "method": "bdev_nvme_attach_controller" 00:19:53.950 },{ 00:19:53.950 "params": { 00:19:53.950 "name": "Nvme10", 00:19:53.950 "trtype": "tcp", 00:19:53.950 "traddr": "10.0.0.2", 00:19:53.950 "adrfam": "ipv4", 00:19:53.950 "trsvcid": "4420", 00:19:53.950 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:53.950 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:53.950 "hdgst": false, 00:19:53.950 "ddgst": false 00:19:53.950 }, 00:19:53.950 "method": "bdev_nvme_attach_controller" 00:19:53.950 }' 00:19:53.951 [2024-12-09 05:14:30.494198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.951 [2024-12-09 05:14:30.535897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.335 Running I/O for 1 seconds... 00:19:56.710 2207.00 IOPS, 137.94 MiB/s 00:19:56.710 Latency(us) 00:19:56.710 [2024-12-09T04:14:33.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.710 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.710 Verification LBA range: start 0x0 length 0x400 00:19:56.710 Nvme1n1 : 1.14 280.21 17.51 0.00 0.00 226287.08 18578.03 238892.97 00:19:56.710 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.710 Verification LBA range: start 0x0 length 0x400 00:19:56.710 Nvme2n1 : 1.15 278.41 17.40 0.00 0.00 223396.20 13848.04 223392.28 00:19:56.710 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.710 Verification LBA range: start 0x0 length 0x400 00:19:56.710 Nvme3n1 : 1.13 282.14 17.63 0.00 0.00 218322.14 15386.71 217921.45 00:19:56.710 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.710 Verification LBA range: start 0x0 length 0x400 00:19:56.710 Nvme4n1 : 1.06 245.68 15.36 0.00 0.00 245125.08 3034.60 232510.33 00:19:56.710 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.710 Verification LBA range: start 0x0 length 0x400 00:19:56.710 Nvme5n1 : 1.11 231.48 14.47 0.00 0.00 257785.32 16184.54 253481.85 00:19:56.710 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.710 Verification LBA range: start 0x0 length 0x400 00:19:56.710 Nvme6n1 : 1.16 276.66 17.29 0.00 0.00 212304.85 8320.22 224304.08 00:19:56.710 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.710 Verification LBA range: start 0x0 length 0x400 00:19:56.710 Nvme7n1 : 1.15 279.08 17.44 0.00 0.00 208131.65 20971.52 217009.64 00:19:56.710 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.710 Verification LBA range: start 0x0 length 0x400 00:19:56.710 Nvme8n1 : 1.14 284.82 17.80 0.00 0.00 199680.07 6838.54 206067.98 00:19:56.710 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:56.710 Verification LBA range: start 0x0 length 0x400 00:19:56.710 Nvme9n1 : 1.16 278.06 17.38 0.00 0.00 202603.84 1510.18 220656.86 00:19:56.710 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:19:56.710 Verification LBA range: start 0x0 length 0x400 00:19:56.710 Nvme10n1 : 1.16 275.42 17.21 0.00 0.00 201692.74 16184.54 232510.33 00:19:56.710 [2024-12-09T04:14:33.356Z] =================================================================================================================== 00:19:56.710 [2024-12-09T04:14:33.356Z] Total : 2711.97 169.50 0.00 0.00 218203.58 1510.18 253481.85 00:19:56.710 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:19:56.710 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:19:56.710 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:56.710 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:56.710 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:19:56.710 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:56.710 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:19:56.710 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:56.710 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:19:56.710 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:56.710 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:56.710 rmmod nvme_tcp 00:19:56.711 rmmod nvme_fabrics 00:19:56.711 rmmod nvme_keyring 00:19:56.970 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:56.970 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:19:56.970 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:19:56.970 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3634994 ']' 00:19:56.970 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3634994 00:19:56.970 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3634994 ']' 00:19:56.970 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3634994 00:19:56.970 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:19:56.970 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.970 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3634994 00:19:56.970 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:56.970 05:14:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:56.970 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3634994' 00:19:56.970 killing process with pid 3634994 00:19:56.970 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3634994 00:19:56.970 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3634994 00:19:57.229 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:57.229 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:57.229 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:57.229 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:19:57.229 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:19:57.229 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:57.229 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:19:57.229 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:57.229 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:57.229 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.229 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:57.229 05:14:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:59.766 00:19:59.766 real 0m15.160s 00:19:59.766 user 0m33.970s 00:19:59.766 sys 0m5.713s 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:59.766 ************************************ 00:19:59.766 END TEST nvmf_shutdown_tc1 00:19:59.766 ************************************ 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:59.766 ************************************ 00:19:59.766 START TEST nvmf_shutdown_tc2 00:19:59.766 ************************************ 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # 
nvmf_shutdown_tc2 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:19:59.766 05:14:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:59.766 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.766 05:14:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:59.766 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.766 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:59.767 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:59.767 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:59.767 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:59.767 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:59.767 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.767 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:59.767 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.767 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:59.767 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:59.767 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.767 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:59.767 Found net devices under 0000:86:00.0: cvl_0_0 00:19:59.767 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.767 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:59.767 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.767 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:59.767 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.767 05:14:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:59.767 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:59.767 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.767 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:59.767 Found net devices under 0000:86:00.1: cvl_0_1 00:19:59.767 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.767 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:59.767 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:59.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:59.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:19:59.767 00:19:59.767 --- 10.0.0.2 ping statistics --- 00:19:59.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.767 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:59.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:59.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:19:59.767 00:19:59.767 --- 10.0.0.1 ping statistics --- 00:19:59.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.767 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3636782 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3636782 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3636782 ']' 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.767 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:59.767 [2024-12-09 05:14:36.334602] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:19:59.767 [2024-12-09 05:14:36.334649] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.767 [2024-12-09 05:14:36.405179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:00.027 [2024-12-09 05:14:36.449835] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.027 [2024-12-09 05:14:36.449872] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.027 [2024-12-09 05:14:36.449880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.027 [2024-12-09 05:14:36.449886] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.027 [2024-12-09 05:14:36.449892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:00.027 [2024-12-09 05:14:36.451365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.027 [2024-12-09 05:14:36.451429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:00.027 [2024-12-09 05:14:36.451515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.027 [2024-12-09 05:14:36.451516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:00.027 [2024-12-09 05:14:36.598699] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.027 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:00.286 Malloc1 00:20:00.286 [2024-12-09 05:14:36.712489] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.286 Malloc2 00:20:00.286 Malloc3 00:20:00.286 Malloc4 00:20:00.286 Malloc5 00:20:00.286 Malloc6 00:20:00.545 Malloc7 00:20:00.546 Malloc8 00:20:00.546 Malloc9 00:20:00.546 Malloc10 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3636837 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3636837 /var/tmp/bdevperf.sock 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3636837 ']' 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:00.546 05:14:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:00.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:00.546 { 00:20:00.546 "params": { 00:20:00.546 "name": "Nvme$subsystem", 00:20:00.546 "trtype": "$TEST_TRANSPORT", 00:20:00.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.546 "adrfam": "ipv4", 00:20:00.546 "trsvcid": "$NVMF_PORT", 00:20:00.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.546 "hdgst": ${hdgst:-false}, 00:20:00.546 "ddgst": ${ddgst:-false} 00:20:00.546 }, 00:20:00.546 "method": "bdev_nvme_attach_controller" 00:20:00.546 } 00:20:00.546 EOF 00:20:00.546 )") 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:00.546 { 00:20:00.546 "params": { 00:20:00.546 "name": "Nvme$subsystem", 00:20:00.546 "trtype": "$TEST_TRANSPORT", 00:20:00.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.546 "adrfam": "ipv4", 00:20:00.546 "trsvcid": "$NVMF_PORT", 00:20:00.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.546 "hdgst": ${hdgst:-false}, 00:20:00.546 "ddgst": ${ddgst:-false} 00:20:00.546 }, 00:20:00.546 "method": "bdev_nvme_attach_controller" 00:20:00.546 } 00:20:00.546 EOF 00:20:00.546 )") 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:00.546 { 00:20:00.546 "params": { 00:20:00.546 
"name": "Nvme$subsystem", 00:20:00.546 "trtype": "$TEST_TRANSPORT", 00:20:00.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.546 "adrfam": "ipv4", 00:20:00.546 "trsvcid": "$NVMF_PORT", 00:20:00.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.546 "hdgst": ${hdgst:-false}, 00:20:00.546 "ddgst": ${ddgst:-false} 00:20:00.546 }, 00:20:00.546 "method": "bdev_nvme_attach_controller" 00:20:00.546 } 00:20:00.546 EOF 00:20:00.546 )") 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:00.546 { 00:20:00.546 "params": { 00:20:00.546 "name": "Nvme$subsystem", 00:20:00.546 "trtype": "$TEST_TRANSPORT", 00:20:00.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.546 "adrfam": "ipv4", 00:20:00.546 "trsvcid": "$NVMF_PORT", 00:20:00.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.546 "hdgst": ${hdgst:-false}, 00:20:00.546 "ddgst": ${ddgst:-false} 00:20:00.546 }, 00:20:00.546 "method": "bdev_nvme_attach_controller" 00:20:00.546 } 00:20:00.546 EOF 00:20:00.546 )") 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:00.546 { 00:20:00.546 "params": { 00:20:00.546 "name": "Nvme$subsystem", 00:20:00.546 "trtype": "$TEST_TRANSPORT", 00:20:00.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.546 "adrfam": "ipv4", 00:20:00.546 "trsvcid": "$NVMF_PORT", 00:20:00.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.546 "hdgst": ${hdgst:-false}, 00:20:00.546 "ddgst": ${ddgst:-false} 00:20:00.546 }, 00:20:00.546 "method": "bdev_nvme_attach_controller" 00:20:00.546 } 00:20:00.546 EOF 00:20:00.546 )") 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:00.546 { 00:20:00.546 "params": { 00:20:00.546 "name": "Nvme$subsystem", 00:20:00.546 "trtype": "$TEST_TRANSPORT", 00:20:00.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.546 "adrfam": "ipv4", 00:20:00.546 "trsvcid": "$NVMF_PORT", 00:20:00.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.546 "hdgst": ${hdgst:-false}, 00:20:00.546 "ddgst": ${ddgst:-false} 00:20:00.546 }, 00:20:00.546 "method": "bdev_nvme_attach_controller" 00:20:00.546 } 00:20:00.546 EOF 00:20:00.546 )") 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:00.546 { 00:20:00.546 "params": { 00:20:00.546 "name": "Nvme$subsystem", 00:20:00.546 "trtype": "$TEST_TRANSPORT", 00:20:00.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.546 "adrfam": "ipv4", 00:20:00.546 "trsvcid": "$NVMF_PORT", 00:20:00.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.546 "hdgst": ${hdgst:-false}, 00:20:00.546 "ddgst": ${ddgst:-false} 00:20:00.546 }, 00:20:00.546 "method": "bdev_nvme_attach_controller" 00:20:00.546 } 00:20:00.546 EOF 00:20:00.546 )") 00:20:00.546 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:00.806 [2024-12-09 05:14:37.189850] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:20:00.806 [2024-12-09 05:14:37.189899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3636837 ] 00:20:00.806 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:00.806 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:00.806 { 00:20:00.806 "params": { 00:20:00.806 "name": "Nvme$subsystem", 00:20:00.806 "trtype": "$TEST_TRANSPORT", 00:20:00.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.806 "adrfam": "ipv4", 00:20:00.806 "trsvcid": "$NVMF_PORT", 00:20:00.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.806 "hdgst": ${hdgst:-false}, 00:20:00.806 "ddgst": ${ddgst:-false} 00:20:00.806 }, 00:20:00.806 "method": "bdev_nvme_attach_controller" 00:20:00.806 } 00:20:00.806 EOF 00:20:00.806 )") 00:20:00.806 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:00.806 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:00.806 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:00.806 { 00:20:00.806 "params": { 00:20:00.806 "name": "Nvme$subsystem", 00:20:00.806 "trtype": "$TEST_TRANSPORT", 00:20:00.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.806 "adrfam": "ipv4", 00:20:00.806 "trsvcid": "$NVMF_PORT", 00:20:00.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.806 "hdgst": ${hdgst:-false}, 00:20:00.806 "ddgst": ${ddgst:-false} 00:20:00.806 }, 00:20:00.806 "method": "bdev_nvme_attach_controller" 00:20:00.806 } 00:20:00.806 EOF 00:20:00.806 )") 00:20:00.806 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:00.806 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:00.806 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:00.806 { 00:20:00.806 "params": { 00:20:00.806 "name": "Nvme$subsystem", 00:20:00.806 "trtype": "$TEST_TRANSPORT", 00:20:00.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.806 
"adrfam": "ipv4", 00:20:00.806 "trsvcid": "$NVMF_PORT", 00:20:00.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.806 "hdgst": ${hdgst:-false}, 00:20:00.806 "ddgst": ${ddgst:-false} 00:20:00.806 }, 00:20:00.806 "method": "bdev_nvme_attach_controller" 00:20:00.806 } 00:20:00.806 EOF 00:20:00.806 )") 00:20:00.806 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:00.806 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:20:00.806 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:00.806 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:00.806 "params": { 00:20:00.806 "name": "Nvme1", 00:20:00.806 "trtype": "tcp", 00:20:00.806 "traddr": "10.0.0.2", 00:20:00.806 "adrfam": "ipv4", 00:20:00.806 "trsvcid": "4420", 00:20:00.806 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.806 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:00.806 "hdgst": false, 00:20:00.806 "ddgst": false 00:20:00.806 }, 00:20:00.806 "method": "bdev_nvme_attach_controller" 00:20:00.806 },{ 00:20:00.806 "params": { 00:20:00.806 "name": "Nvme2", 00:20:00.806 "trtype": "tcp", 00:20:00.806 "traddr": "10.0.0.2", 00:20:00.806 "adrfam": "ipv4", 00:20:00.806 "trsvcid": "4420", 00:20:00.806 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:00.806 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:00.806 "hdgst": false, 00:20:00.806 "ddgst": false 00:20:00.806 }, 00:20:00.806 "method": "bdev_nvme_attach_controller" 00:20:00.806 },{ 00:20:00.806 "params": { 00:20:00.806 "name": "Nvme3", 00:20:00.806 "trtype": "tcp", 00:20:00.806 "traddr": "10.0.0.2", 00:20:00.806 "adrfam": "ipv4", 00:20:00.806 "trsvcid": "4420", 00:20:00.806 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:00.806 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:00.806 "hdgst": false, 00:20:00.806 "ddgst": false 00:20:00.806 }, 00:20:00.806 "method": "bdev_nvme_attach_controller" 00:20:00.806 },{ 00:20:00.806 "params": { 00:20:00.806 "name": "Nvme4", 00:20:00.806 "trtype": "tcp", 00:20:00.806 "traddr": "10.0.0.2", 00:20:00.806 "adrfam": "ipv4", 00:20:00.806 "trsvcid": "4420", 00:20:00.806 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:00.806 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:00.806 "hdgst": false, 00:20:00.806 "ddgst": false 00:20:00.806 }, 00:20:00.806 "method": "bdev_nvme_attach_controller" 00:20:00.806 },{ 00:20:00.806 "params": { 00:20:00.806 "name": "Nvme5", 00:20:00.806 "trtype": "tcp", 00:20:00.806 "traddr": "10.0.0.2", 00:20:00.806 "adrfam": "ipv4", 00:20:00.806 "trsvcid": "4420", 00:20:00.806 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:00.806 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:00.806 "hdgst": false, 00:20:00.806 "ddgst": false 00:20:00.806 }, 00:20:00.806 "method": "bdev_nvme_attach_controller" 00:20:00.806 },{ 00:20:00.806 "params": { 00:20:00.806 "name": "Nvme6", 00:20:00.806 "trtype": "tcp", 00:20:00.806 "traddr": "10.0.0.2", 00:20:00.806 "adrfam": "ipv4", 00:20:00.806 "trsvcid": "4420", 00:20:00.806 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:00.806 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:00.806 "hdgst": false, 00:20:00.806 "ddgst": false 00:20:00.806 }, 00:20:00.806 "method": "bdev_nvme_attach_controller" 00:20:00.806 },{ 00:20:00.806 "params": { 00:20:00.806 "name": "Nvme7", 00:20:00.806 "trtype": "tcp", 00:20:00.806 "traddr": "10.0.0.2", 
00:20:00.806 "adrfam": "ipv4", 00:20:00.806 "trsvcid": "4420", 00:20:00.806 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:00.806 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:00.806 "hdgst": false, 00:20:00.806 "ddgst": false 00:20:00.806 }, 00:20:00.806 "method": "bdev_nvme_attach_controller" 00:20:00.806 },{ 00:20:00.806 "params": { 00:20:00.806 "name": "Nvme8", 00:20:00.806 "trtype": "tcp", 00:20:00.806 "traddr": "10.0.0.2", 00:20:00.806 "adrfam": "ipv4", 00:20:00.806 "trsvcid": "4420", 00:20:00.806 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:00.806 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:00.806 "hdgst": false, 00:20:00.806 "ddgst": false 00:20:00.806 }, 00:20:00.806 "method": "bdev_nvme_attach_controller" 00:20:00.806 },{ 00:20:00.806 "params": { 00:20:00.806 "name": "Nvme9", 00:20:00.806 "trtype": "tcp", 00:20:00.806 "traddr": "10.0.0.2", 00:20:00.806 "adrfam": "ipv4", 00:20:00.806 "trsvcid": "4420", 00:20:00.806 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:00.806 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:00.806 "hdgst": false, 00:20:00.806 "ddgst": false 00:20:00.806 }, 00:20:00.807 "method": "bdev_nvme_attach_controller" 00:20:00.807 },{ 00:20:00.807 "params": { 00:20:00.807 "name": "Nvme10", 00:20:00.807 "trtype": "tcp", 00:20:00.807 "traddr": "10.0.0.2", 00:20:00.807 "adrfam": "ipv4", 00:20:00.807 "trsvcid": "4420", 00:20:00.807 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:00.807 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:00.807 "hdgst": false, 00:20:00.807 "ddgst": false 00:20:00.807 }, 00:20:00.807 "method": "bdev_nvme_attach_controller" 00:20:00.807 }' 00:20:00.807 [2024-12-09 05:14:37.257638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.807 [2024-12-09 05:14:37.301577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.182 Running I/O for 10 seconds... 
00:20:02.748 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.748 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:02.748 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:02.748 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.748 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:02.748 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.748 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:02.748 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:02.748 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:02.748 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:20:02.748 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:20:02.748 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:02.748 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:02.748 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:02.748 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:02.748 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.748 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:02.748 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.748 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=72 00:20:02.748 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 72 -ge 100 ']' 00:20:02.748 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:03.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:03.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:03.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:03.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:03.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.007 05:14:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:03.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=141 00:20:03.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 141 -ge 100 ']' 00:20:03.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:20:03.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:20:03.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:20:03.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3636837 00:20:03.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3636837 ']' 00:20:03.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3636837 00:20:03.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:03.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3636837 00:20:03.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:03.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:03.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3636837' 00:20:03.007 killing process with pid 3636837 00:20:03.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3636837 00:20:03.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3636837 00:20:03.007 Received shutdown signal, test time was about 0.891946 seconds 00:20:03.007 00:20:03.007 Latency(us) 00:20:03.007 [2024-12-09T04:14:39.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.007 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:03.007 Verification LBA range: start 0x0 length 0x400 00:20:03.007 Nvme1n1 : 0.88 300.22 18.76 0.00 0.00 210366.93 2236.77 221568.67 00:20:03.007 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:03.007 Verification LBA range: start 0x0 length 0x400 00:20:03.007 Nvme2n1 : 0.85 225.73 14.11 0.00 0.00 274933.61 17210.32 218833.25 00:20:03.007 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:03.007 Verification LBA range: start 0x0 length 0x400 00:20:03.007 Nvme3n1 : 0.88 292.25 18.27 0.00 0.00 208535.37 16184.54 221568.67 00:20:03.007 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:03.007 Verification LBA range: start 0x0 length 0x400 00:20:03.008 Nvme4n1 : 0.87 293.07 18.32 0.00 0.00 203908.45 17552.25 
216097.84 00:20:03.008 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:03.008 Verification LBA range: start 0x0 length 0x400 00:20:03.008 Nvme5n1 : 0.89 287.30 17.96 0.00 0.00 204221.89 16982.37 220656.86 00:20:03.008 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:03.008 Verification LBA range: start 0x0 length 0x400 00:20:03.008 Nvme6n1 : 0.88 289.41 18.09 0.00 0.00 198707.87 16640.45 235245.75 00:20:03.008 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:03.008 Verification LBA range: start 0x0 length 0x400 00:20:03.008 Nvme7n1 : 0.86 302.04 18.88 0.00 0.00 184781.77 6952.51 198773.54 00:20:03.008 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:03.008 Verification LBA range: start 0x0 length 0x400 00:20:03.008 Nvme8n1 : 0.89 287.92 18.00 0.00 0.00 191868.22 18578.03 216097.84 00:20:03.008 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:03.008 Verification LBA range: start 0x0 length 0x400 00:20:03.008 Nvme9n1 : 0.86 222.93 13.93 0.00 0.00 241346.71 20971.52 235245.75 00:20:03.008 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:03.008 Verification LBA range: start 0x0 length 0x400 00:20:03.008 Nvme10n1 : 0.87 221.65 13.85 0.00 0.00 237725.90 17552.25 251658.24 00:20:03.008 [2024-12-09T04:14:39.654Z] =================================================================================================================== 00:20:03.008 [2024-12-09T04:14:39.654Z] Total : 2722.52 170.16 0.00 0.00 212677.74 2236.77 251658.24 00:20:03.266 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:20:04.256 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3636782 00:20:04.256 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:20:04.256 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:04.256 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:04.256 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:04.256 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:04.257 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:04.257 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:20:04.257 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:04.257 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:20:04.257 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:04.257 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:04.257 rmmod nvme_tcp 00:20:04.257 rmmod nvme_fabrics 00:20:04.257 rmmod nvme_keyring 00:20:04.257 05:14:40 
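The waitforio step above polls bdevperf over its RPC socket until Nvme1n1 has completed at least 100 reads (72 on the first pass, 141 on the second), and only then kills the perf process and lets it print the latency table. An approximate reconstruction of that loop, assuming the usual SPDK rpc.py client behind rpc_cmd:

  ret=1
  for (( i = 10; i != 0; i-- )); do
      read_io_count=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
                      | jq -r '.bdevs[0].num_read_ops')
      [ "$read_io_count" -ge 100 ] && { ret=0; break; }
      sleep 0.25
  done
  # ret == 0 means the bdev served enough reads before the retries ran out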
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:04.257 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:20:04.257 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:20:04.257 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3636782 ']' 00:20:04.257 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3636782 00:20:04.257 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3636782 ']' 00:20:04.257 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3636782 00:20:04.257 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:04.257 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:04.257 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3636782 00:20:04.515 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:04.515 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:04.515 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3636782' 00:20:04.515 killing process with pid 3636782 00:20:04.515 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3636782 00:20:04.515 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3636782 00:20:04.774 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:04.774 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:04.774 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:04.774 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:20:04.774 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:20:04.774 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:20:04.774 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:04.774 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:04.774 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:04.774 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.774 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:04.774 05:14:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.310 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:07.310 00:20:07.310 real 0m7.425s 00:20:07.310 user 0m21.814s 00:20:07.310 sys 0m1.331s 00:20:07.310 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:07.310 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:07.310 ************************************ 00:20:07.310 END TEST nvmf_shutdown_tc2 00:20:07.310 ************************************ 00:20:07.310 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:07.310 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:07.310 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:07.310 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:07.310 ************************************ 00:20:07.310 START TEST nvmf_shutdown_tc3 00:20:07.310 ************************************ 00:20:07.310 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:20:07.310 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:20:07.310 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:07.310 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:07.310 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.310 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:07.310 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:07.310 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:07.310 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.310 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:07.310 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.310 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:07.310 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:07.310 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:07.310 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:07.310 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:07.310 05:14:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:07.310 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:07.311 05:14:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:07.311 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:07.311 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:07.311 05:14:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:07.311 Found net devices under 0000:86:00.0: cvl_0_0 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:07.311 Found net devices under 0000:86:00.1: cvl_0_1 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:07.311 05:14:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:07.311 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:07.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:07.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:20:07.311 00:20:07.311 --- 10.0.0.2 ping statistics --- 00:20:07.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.312 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:20:07.312 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:07.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:07.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:20:07.312 00:20:07.312 --- 10.0.0.1 ping statistics --- 00:20:07.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.312 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:20:07.312 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:07.312 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:20:07.312 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:07.312 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:07.312 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:07.312 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:07.312 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:07.312 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:07.312 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:07.312 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:07.312 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:07.312 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:07.312 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:07.312 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3638101 00:20:07.312 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3638101 00:20:07.312 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:07.312 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3638101 ']' 00:20:07.312 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.312 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.312 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
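Each test case tags its firewall rule so teardown can remove it without touching unrelated rules: the ipts wrapper shown above appends an SPDK_NVMF comment, and the iptr step at the end of tc2 restored the ruleset minus those entries. In effect (a condensed sketch of the two helpers, reconstructed from the traced commands):

  # add: tag the ACCEPT rule so it can be identified later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # remove: re-load the saved ruleset with every SPDK_NVMF-tagged line filtered out
  iptables-save | grep -v SPDK_NVMF | iptables-restore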
00:20:07.312 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.312 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:07.312 [2024-12-09 05:14:43.839308] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:20:07.312 [2024-12-09 05:14:43.839358] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.312 [2024-12-09 05:14:43.906945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:07.312 [2024-12-09 05:14:43.947574] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.312 [2024-12-09 05:14:43.947612] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.312 [2024-12-09 05:14:43.947620] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.312 [2024-12-09 05:14:43.947626] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.312 [2024-12-09 05:14:43.947632] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:07.312 [2024-12-09 05:14:43.949335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.312 [2024-12-09 05:14:43.949402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:07.312 [2024-12-09 05:14:43.949491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.312 [2024-12-09 05:14:43.949491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:07.571 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.571 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:07.571 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:07.571 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:07.571 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:07.571 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.571 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:07.571 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.571 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:07.572 [2024-12-09 05:14:44.100390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:07.572 05:14:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.572 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:07.572 Malloc1 
00:20:07.572 [2024-12-09 05:14:44.213115] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.831 Malloc2 00:20:07.831 Malloc3 00:20:07.831 Malloc4 00:20:07.831 Malloc5 00:20:07.831 Malloc6 00:20:07.831 Malloc7 00:20:08.091 Malloc8 00:20:08.091 Malloc9 00:20:08.091 Malloc10 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3638375 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3638375 /var/tmp/bdevperf.sock 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3638375 ']' 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:08.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
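The '# cat' loop and the batched rpc_cmd above build rpcs.txt with one block per subsystem (1..10) and replay it against the target; the heredoc body itself is not shown in this excerpt. Judging from the Malloc1..Malloc10 bdevs, the cnode1..cnode10 subsystem NQNs, and the 10.0.0.2:4420 listener that appear in this log, each block emits RPCs along these lines (a hypothetical reconstruction; the exact arguments, including the malloc size, live in test/nvmf/target/shutdown.sh and may differ):

    # one block per i in 1..10 (bdev sizes here are assumed for illustration)
    bdev_malloc_create -b Malloc$i 64 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420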
00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.091 { 00:20:08.091 "params": { 00:20:08.091 "name": "Nvme$subsystem", 00:20:08.091 "trtype": "$TEST_TRANSPORT", 00:20:08.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.091 "adrfam": "ipv4", 00:20:08.091 "trsvcid": "$NVMF_PORT", 00:20:08.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.091 "hdgst": ${hdgst:-false}, 00:20:08.091 "ddgst": ${ddgst:-false} 00:20:08.091 }, 00:20:08.091 "method": "bdev_nvme_attach_controller" 00:20:08.091 } 00:20:08.091 EOF 00:20:08.091 )") 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.091 { 00:20:08.091 "params": { 00:20:08.091 "name": "Nvme$subsystem", 00:20:08.091 "trtype": "$TEST_TRANSPORT", 00:20:08.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.091 "adrfam": "ipv4", 00:20:08.091 "trsvcid": "$NVMF_PORT", 00:20:08.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.091 "hdgst": ${hdgst:-false}, 00:20:08.091 "ddgst": ${ddgst:-false} 00:20:08.091 }, 00:20:08.091 "method": "bdev_nvme_attach_controller" 00:20:08.091 } 00:20:08.091 EOF 00:20:08.091 )") 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.091 { 00:20:08.091 "params": { 00:20:08.091 "name": "Nvme$subsystem", 00:20:08.091 "trtype": "$TEST_TRANSPORT", 00:20:08.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.091 "adrfam": "ipv4", 00:20:08.091 "trsvcid": "$NVMF_PORT", 00:20:08.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.091 "hdgst": ${hdgst:-false}, 00:20:08.091 "ddgst": ${ddgst:-false} 00:20:08.091 }, 00:20:08.091 "method": "bdev_nvme_attach_controller" 00:20:08.091 } 00:20:08.091 EOF 00:20:08.091 )") 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:20:08.091 { 00:20:08.091 "params": { 00:20:08.091 "name": "Nvme$subsystem", 00:20:08.091 "trtype": "$TEST_TRANSPORT", 00:20:08.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.091 "adrfam": "ipv4", 00:20:08.091 "trsvcid": "$NVMF_PORT", 00:20:08.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.091 "hdgst": ${hdgst:-false}, 00:20:08.091 "ddgst": ${ddgst:-false} 00:20:08.091 }, 00:20:08.091 "method": "bdev_nvme_attach_controller" 00:20:08.091 } 00:20:08.091 EOF 00:20:08.091 )") 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.091 { 00:20:08.091 "params": { 00:20:08.091 "name": "Nvme$subsystem", 00:20:08.091 "trtype": "$TEST_TRANSPORT", 00:20:08.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.091 "adrfam": "ipv4", 00:20:08.091 "trsvcid": "$NVMF_PORT", 00:20:08.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.091 "hdgst": ${hdgst:-false}, 00:20:08.091 "ddgst": ${ddgst:-false} 00:20:08.091 }, 00:20:08.091 "method": "bdev_nvme_attach_controller" 00:20:08.091 } 00:20:08.091 EOF 00:20:08.091 )") 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.091 { 00:20:08.091 "params": { 00:20:08.091 "name": "Nvme$subsystem", 00:20:08.091 "trtype": "$TEST_TRANSPORT", 00:20:08.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.091 "adrfam": "ipv4", 00:20:08.091 "trsvcid": "$NVMF_PORT", 00:20:08.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.091 "hdgst": ${hdgst:-false}, 00:20:08.091 "ddgst": ${ddgst:-false} 00:20:08.091 }, 00:20:08.091 "method": "bdev_nvme_attach_controller" 00:20:08.091 } 00:20:08.091 EOF 00:20:08.091 )") 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.091 { 00:20:08.091 "params": { 00:20:08.091 "name": "Nvme$subsystem", 00:20:08.091 "trtype": "$TEST_TRANSPORT", 00:20:08.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.091 "adrfam": "ipv4", 00:20:08.091 "trsvcid": "$NVMF_PORT", 00:20:08.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.091 "hdgst": ${hdgst:-false}, 00:20:08.091 "ddgst": ${ddgst:-false} 00:20:08.091 }, 00:20:08.091 "method": "bdev_nvme_attach_controller" 00:20:08.091 } 00:20:08.091 EOF 00:20:08.091 )") 00:20:08.091 [2024-12-09 05:14:44.690456] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:20:08.091 [2024-12-09 05:14:44.690507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3638375 ] 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.091 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.091 { 00:20:08.091 "params": { 00:20:08.091 "name": "Nvme$subsystem", 00:20:08.091 "trtype": "$TEST_TRANSPORT", 00:20:08.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.092 "adrfam": "ipv4", 00:20:08.092 "trsvcid": "$NVMF_PORT", 00:20:08.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.092 "hdgst": ${hdgst:-false}, 00:20:08.092 "ddgst": ${ddgst:-false} 00:20:08.092 }, 00:20:08.092 "method": "bdev_nvme_attach_controller" 00:20:08.092 } 00:20:08.092 EOF 00:20:08.092 )") 00:20:08.092 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:08.092 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.092 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.092 { 00:20:08.092 "params": { 00:20:08.092 "name": "Nvme$subsystem", 00:20:08.092 "trtype": "$TEST_TRANSPORT", 00:20:08.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.092 "adrfam": "ipv4", 00:20:08.092 "trsvcid": "$NVMF_PORT", 00:20:08.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.092 "hdgst": ${hdgst:-false}, 00:20:08.092 "ddgst": ${ddgst:-false} 00:20:08.092 }, 00:20:08.092 "method": "bdev_nvme_attach_controller" 00:20:08.092 } 00:20:08.092 EOF 00:20:08.092 )") 00:20:08.092 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:08.092 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.092 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.092 { 00:20:08.092 "params": { 00:20:08.092 "name": "Nvme$subsystem", 00:20:08.092 "trtype": "$TEST_TRANSPORT", 00:20:08.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.092 "adrfam": "ipv4", 00:20:08.092 "trsvcid": "$NVMF_PORT", 00:20:08.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.092 "hdgst": ${hdgst:-false}, 00:20:08.092 "ddgst": ${ddgst:-false} 00:20:08.092 }, 00:20:08.092 "method": "bdev_nvme_attach_controller" 00:20:08.092 } 00:20:08.092 EOF 00:20:08.092 )") 00:20:08.092 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:08.092 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
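Each pass through the loop above appends one bdev_nvme_attach_controller entry to the config array; the IFS=, join and printf emit the combined list inside a heredoc that jq normalizes into the JSON document bdevperf reads over /dev/fd/63 (that combined list is printed just below). In effect, the initiator side is equivalent to an invocation along these lines (paths and flag values as traced earlier in this log; gen_nvmf_target_json is the helper from nvmf/common.sh):

    # queue depth 64, 64 KiB I/O, 'verify' workload, 10 s run
    ./build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json {1..10}) \
        -q 64 -o 65536 -w verify -t 10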
00:20:08.092 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:20:08.092 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:08.092 "params": { 00:20:08.092 "name": "Nvme1", 00:20:08.092 "trtype": "tcp", 00:20:08.092 "traddr": "10.0.0.2", 00:20:08.092 "adrfam": "ipv4", 00:20:08.092 "trsvcid": "4420", 00:20:08.092 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.092 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:08.092 "hdgst": false, 00:20:08.092 "ddgst": false 00:20:08.092 }, 00:20:08.092 "method": "bdev_nvme_attach_controller" 00:20:08.092 },{ 00:20:08.092 "params": { 00:20:08.092 "name": "Nvme2", 00:20:08.092 "trtype": "tcp", 00:20:08.092 "traddr": "10.0.0.2", 00:20:08.092 "adrfam": "ipv4", 00:20:08.092 "trsvcid": "4420", 00:20:08.092 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:08.092 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:08.092 "hdgst": false, 00:20:08.092 "ddgst": false 00:20:08.092 }, 00:20:08.092 "method": "bdev_nvme_attach_controller" 00:20:08.092 },{ 00:20:08.092 "params": { 00:20:08.092 "name": "Nvme3", 00:20:08.092 "trtype": "tcp", 00:20:08.092 "traddr": "10.0.0.2", 00:20:08.092 "adrfam": "ipv4", 00:20:08.092 "trsvcid": "4420", 00:20:08.092 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:08.092 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:08.092 "hdgst": false, 00:20:08.092 "ddgst": false 00:20:08.092 }, 00:20:08.092 "method": "bdev_nvme_attach_controller" 00:20:08.092 },{ 00:20:08.092 "params": { 00:20:08.092 "name": "Nvme4", 00:20:08.092 "trtype": "tcp", 00:20:08.092 "traddr": "10.0.0.2", 00:20:08.092 "adrfam": "ipv4", 00:20:08.092 "trsvcid": "4420", 00:20:08.092 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:08.092 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:08.092 "hdgst": false, 00:20:08.092 "ddgst": false 00:20:08.092 }, 00:20:08.092 "method": "bdev_nvme_attach_controller" 00:20:08.092 },{ 00:20:08.092 "params": { 00:20:08.092 "name": "Nvme5", 00:20:08.092 "trtype": "tcp", 00:20:08.092 "traddr": "10.0.0.2", 00:20:08.092 "adrfam": "ipv4", 00:20:08.092 "trsvcid": "4420", 00:20:08.092 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:08.092 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:08.092 "hdgst": false, 00:20:08.092 "ddgst": false 00:20:08.092 }, 00:20:08.092 "method": "bdev_nvme_attach_controller" 00:20:08.092 },{ 00:20:08.092 "params": { 00:20:08.092 "name": "Nvme6", 00:20:08.092 "trtype": "tcp", 00:20:08.092 "traddr": "10.0.0.2", 00:20:08.092 "adrfam": "ipv4", 00:20:08.092 "trsvcid": "4420", 00:20:08.092 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:08.092 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:08.092 "hdgst": false, 00:20:08.092 "ddgst": false 00:20:08.092 }, 00:20:08.092 "method": "bdev_nvme_attach_controller" 00:20:08.092 },{ 00:20:08.092 "params": { 00:20:08.092 "name": "Nvme7", 00:20:08.092 "trtype": "tcp", 00:20:08.092 "traddr": "10.0.0.2", 00:20:08.092 "adrfam": "ipv4", 00:20:08.092 "trsvcid": "4420", 00:20:08.092 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:08.092 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:08.092 "hdgst": false, 00:20:08.092 "ddgst": false 00:20:08.092 }, 00:20:08.092 "method": "bdev_nvme_attach_controller" 00:20:08.092 },{ 00:20:08.092 "params": { 00:20:08.092 "name": "Nvme8", 00:20:08.092 "trtype": "tcp", 00:20:08.092 "traddr": "10.0.0.2", 00:20:08.092 "adrfam": "ipv4", 00:20:08.092 "trsvcid": "4420", 00:20:08.092 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:08.092 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:08.092 "hdgst": false, 00:20:08.092 "ddgst": false 00:20:08.092 }, 00:20:08.092 "method": "bdev_nvme_attach_controller" 00:20:08.092 },{ 00:20:08.092 "params": { 00:20:08.092 "name": "Nvme9", 00:20:08.092 "trtype": "tcp", 00:20:08.092 "traddr": "10.0.0.2", 00:20:08.092 "adrfam": "ipv4", 00:20:08.092 "trsvcid": "4420", 00:20:08.092 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:08.092 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:08.092 "hdgst": false, 00:20:08.092 "ddgst": false 00:20:08.092 }, 00:20:08.092 "method": "bdev_nvme_attach_controller" 00:20:08.092 },{ 00:20:08.092 "params": { 00:20:08.092 "name": "Nvme10", 00:20:08.092 "trtype": "tcp", 00:20:08.092 "traddr": "10.0.0.2", 00:20:08.092 "adrfam": "ipv4", 00:20:08.092 "trsvcid": "4420", 00:20:08.092 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:08.092 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:08.092 "hdgst": false, 00:20:08.092 "ddgst": false 00:20:08.092 }, 00:20:08.092 "method": "bdev_nvme_attach_controller" 00:20:08.092 }' 00:20:08.350 [2024-12-09 05:14:44.757590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.350 [2024-12-09 05:14:44.799016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.727 Running I/O for 10 seconds... 00:20:09.986 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:09.986 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:09.986 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:09.986 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.986 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:09.986 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.986 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:09.986 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:09.986 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:09.986 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:09.986 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:20:09.986 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:20:09.986 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:09.986 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:09.986 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:09.986 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:09.986 05:14:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.986 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:10.245 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.245 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:10.245 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:10.245 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:10.520 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:10.520 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:10.520 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:10.520 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:10.520 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.520 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:10.520 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.520 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:20:10.520 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:20:10.520 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:20:10.520 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:20:10.520 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:20:10.520 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3638101 00:20:10.520 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3638101 ']' 00:20:10.520 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3638101 00:20:10.520 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:20:10.520 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.520 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3638101 00:20:10.520 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:10.520 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:10.520 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 3638101' 00:20:10.520 killing process with pid 3638101 00:20:10.520 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3638101 00:20:10.520 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3638101 00:20:10.520 [2024-12-09 05:14:47.009440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.520 [2024-12-09 05:14:47.009522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.520 [2024-12-09 05:14:47.009530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.520 [2024-12-09 05:14:47.009537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.520 [2024-12-09 05:14:47.009544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.520 [2024-12-09 05:14:47.009551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.520 [2024-12-09 05:14:47.009558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.520 [2024-12-09 05:14:47.009564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.520 [2024-12-09 05:14:47.009570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.520 [2024-12-09 05:14:47.009577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.520 [2024-12-09 05:14:47.009583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.520 [2024-12-09 05:14:47.009589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.520 [2024-12-09 05:14:47.009596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.520 [2024-12-09 05:14:47.009602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009785] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.009923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce30 is same with the state(6) to be set 
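Stepping back from the state-machine notices: the waitforio/killprocess sequence traced a little earlier (target/shutdown.sh@60-70 plus killprocess from autotest_common.sh) is what drives this test case. A simplified sketch of that logic, using the values from this run (bdevperf RPC socket /var/tmp/bdevperf.sock, target pid 3638101):

    # poll Nvme1n1 until bdevperf has completed at least 100 reads,
    # then kill the nvmf_tgt so shutdown happens with I/O still in flight
    i=10
    while (( i != 0 )); do
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
                        | jq -r '.bdevs[0].num_read_ops')
        [ "$read_io_count" -ge 100 ] && break
        sleep 0.25
        (( i-- ))
    done
    kill 3638101    # nvmfpid; killprocess also waits for the pid to exit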
00:20:10.521 [2024-12-09 05:14:47.011072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is 
same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.521 [2024-12-09 05:14:47.011304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.011310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.011316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.011323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.011328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.011335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.011341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.011350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.011356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.011363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.011368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.011374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.011381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.011388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.011396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.011403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.011409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.011415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.011422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.011428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.011433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.011439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.011446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.011453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.011459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.011467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.011473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.011480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa94d20 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.012814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa951f0 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.012848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa951f0 is same with the state(6) to be set 00:20:10.522 [2024-12-09 05:14:47.012856] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa951f0 is same with the state(6) to be set
(the tcp.c:1773 recv-state error for tqpair=0xa951f0 repeats with identical text from [2024-12-09 05:14:47.012863] through [2024-12-09 05:14:47.013255])
00:20:10.523 [2024-12-09 05:14:47.014247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa956e0 is same with the state(6) to be set
(repeats for tqpair=0xa956e0 through [2024-12-09 05:14:47.014676])
00:20:10.523 [2024-12-09 05:14:47.015260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa95bb0 is same with the state(6) to be set
(repeats for tqpair=0xa95bb0 through [2024-12-09 05:14:47.015687])
00:20:10.524 [2024-12-09 05:14:47.016473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96080 is same with the state(6) to be set
(repeats for tqpair=0xa96080 through [2024-12-09 05:14:47.016879])
00:20:10.525 [2024-12-09 05:14:47.017306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.525 [2024-12-09 05:14:47.017338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.525 [2024-12-09 05:14:47.017348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.525 [2024-12-09 05:14:47.017355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.525 [2024-12-09 05:14:47.017363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.525 [2024-12-09 05:14:47.017370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.525 [2024-12-09 05:14:47.017378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.525 [2024-12-09 05:14:47.017385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.525 [2024-12-09 05:14:47.017392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e81c0 is same with the state(6) to be set 00:20:10.525 [2024-12-09 05:14:47.017421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.525 [2024-12-09 05:14:47.017431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.525 [2024-12-09 05:14:47.017439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.525 [2024-12-09 05:14:47.017446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.525 [2024-12-09 05:14:47.017453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.525 [2024-12-09 05:14:47.017460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.525 [2024-12-09 05:14:47.017467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.525 [2024-12-09 05:14:47.017474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.525 [2024-12-09 05:14:47.017481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c61410 is same with the state(6) to be set 00:20:10.525 [2024-12-09 05:14:47.017508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.525 [2024-12-09 05:14:47.017517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.525 [2024-12-09 05:14:47.017524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.525 [2024-12-09 05:14:47.017531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.525 [2024-12-09 05:14:47.017539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.525 [2024-12-09 05:14:47.017546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.525 [2024-12-09 05:14:47.017553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.525 [2024-12-09 05:14:47.017560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.525 [2024-12-09 05:14:47.017570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e7d30 is same with the state(6) to be set 00:20:10.525 [2024-12-09 05:14:47.017596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.525 [2024-12-09 05:14:47.017605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.525 [2024-12-09 05:14:47.017614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.525 [2024-12-09 05:14:47.017620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.525 [2024-12-09 05:14:47.017628] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.525 [2024-12-09 05:14:47.017635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.525 [2024-12-09 05:14:47.017650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.525 [2024-12-09 05:14:47.017657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.525 [2024-12-09 05:14:47.017663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c12640 is same with the state(6) to be set 00:20:10.525 [2024-12-09 05:14:47.017735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.525 [2024-12-09 05:14:47.017746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.525 [2024-12-09 05:14:47.017753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.525 [2024-12-09 05:14:47.017760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.525 [2024-12-09 05:14:47.017767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.525 [2024-12-09 05:14:47.017774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.525 [2024-12-09 05:14:47.017782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.525 [2024-12-09 05:14:47.017788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.525 [2024-12-09 05:14:47.017794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0c150 is same with the state(6) to be set 00:20:10.525 [2024-12-09 05:14:47.017818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.526 [2024-12-09 05:14:47.017826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.017834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.526 [2024-12-09 05:14:47.017841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.017849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.526 [2024-12-09 05:14:47.017855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.017867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.526 [2024-12-09 05:14:47.017874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.017881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c13550 is same with the state(6) to be set 00:20:10.526 [2024-12-09 05:14:47.017906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.526 [2024-12-09 05:14:47.017915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.017922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.526 [2024-12-09 05:14:47.017929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.017936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.526 [2024-12-09 05:14:47.017943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.017950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.526 [2024-12-09 05:14:47.017957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.017964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dc200 is same with the state(6) to be set 00:20:10.526 [2024-12-09 05:14:47.018301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.526 [2024-12-09 05:14:47.018322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.018339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.526 [2024-12-09 05:14:47.018347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.018356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.526 [2024-12-09 05:14:47.018363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.018372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.526 [2024-12-09 05:14:47.018378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.018387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.526 
[2024-12-09 05:14:47.018393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.018402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.526 [2024-12-09 05:14:47.018409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.018418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.526 [2024-12-09 05:14:47.018428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.018436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.526 [2024-12-09 05:14:47.018443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.018453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.526 [2024-12-09 05:14:47.018460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.018468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.526 [2024-12-09 05:14:47.018475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.018483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.526 [2024-12-09 05:14:47.018490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.018499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.526 [2024-12-09 05:14:47.018507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.018515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.526 [2024-12-09 05:14:47.018522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.018530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.526 [2024-12-09 05:14:47.018536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.018545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.526 [2024-12-09 
05:14:47.018554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.018562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.526 [2024-12-09 05:14:47.018569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.018577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.526 [2024-12-09 05:14:47.018584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.018593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.526 [2024-12-09 05:14:47.018601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.018610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.526 [2024-12-09 05:14:47.018616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.018630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.526 [2024-12-09 05:14:47.018638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.018647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.526 [2024-12-09 05:14:47.018653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.018662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.526 [2024-12-09 05:14:47.018669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.018667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.526 [2024-12-09 05:14:47.018679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.526 [2024-12-09 05:14:47.018683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.526 [2024-12-09 05:14:47.018686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.018691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.526 [2024-12-09 05:14:47.018695] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.526 [2024-12-09 05:14:47.018699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.526 [2024-12-09 05:14:47.018703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.018707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.526 [2024-12-09 05:14:47.018712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.526 [2024-12-09 05:14:47.018715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.526 [2024-12-09 05:14:47.018720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.018723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.526 [2024-12-09 05:14:47.018731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.526 [2024-12-09 05:14:47.018731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.526 [2024-12-09 05:14:47.018738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.526 [2024-12-09 05:14:47.018739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.526 [2024-12-09 05:14:47.018745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.526 [2024-12-09 05:14:47.018748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.527 [2024-12-09 05:14:47.018754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with t[2024-12-09 05:14:47.018756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:20:10.527 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.527 [2024-12-09 05:14:47.018767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.527 [2024-12-09 05:14:47.018774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.527 [2024-12-09 05:14:47.018782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018786] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.527 [2024-12-09 05:14:47.018789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.527 [2024-12-09 05:14:47.018797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.527 [2024-12-09 05:14:47.018805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.527 [2024-12-09 05:14:47.018812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.527 [2024-12-09 05:14:47.018826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.527 [2024-12-09 05:14:47.018833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.527 [2024-12-09 05:14:47.018840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.527 [2024-12-09 05:14:47.018851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.527 [2024-12-09 05:14:47.018858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.527 [2024-12-09 05:14:47.018865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.527 [2024-12-09 05:14:47.018879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.527 [2024-12-09 05:14:47.018887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.527 [2024-12-09 05:14:47.018894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.527 [2024-12-09 05:14:47.018902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.527 [2024-12-09 05:14:47.018917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.527 [2024-12-09 05:14:47.018924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.527 [2024-12-09 05:14:47.018932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.527 [2024-12-09 05:14:47.018940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.527 [2024-12-09 05:14:47.018947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.527 [2024-12-09 05:14:47.018954] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.527 [2024-12-09 05:14:47.018969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.527 [2024-12-09 05:14:47.018977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.527 [2024-12-09 05:14:47.018984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.018989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.527 [2024-12-09 05:14:47.018992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.019004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.527 [2024-12-09 05:14:47.019005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.019014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.019014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.527 [2024-12-09 05:14:47.019023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.019028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.527 [2024-12-09 05:14:47.019030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.019036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.527 [2024-12-09 05:14:47.019038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.019045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.019046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.527 [2024-12-09 05:14:47.019052] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.019054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.527 [2024-12-09 05:14:47.019058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.019063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.527 [2024-12-09 05:14:47.019065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.019071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.527 [2024-12-09 05:14:47.019073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.019080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.019081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.527 [2024-12-09 05:14:47.019089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.527 [2024-12-09 05:14:47.019090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.527 [2024-12-09 05:14:47.019100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.528 [2024-12-09 05:14:47.019101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.019107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.528 [2024-12-09 05:14:47.019110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.019115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.528 [2024-12-09 05:14:47.019119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.019123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.528 [2024-12-09 05:14:47.019127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.019131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.528 [2024-12-09 05:14:47.019136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 
05:14:47.019138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.528 [2024-12-09 05:14:47.019144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.019144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.528 [2024-12-09 05:14:47.019153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.528 [2024-12-09 05:14:47.019154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.019160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96550 is same with the state(6) to be set 00:20:10.528 [2024-12-09 05:14:47.019162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.019172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.019179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.019187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.019194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.019202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.019209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.019217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.019224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.019233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.019240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.019249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.019256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.019264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.019271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.019279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.019285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.019295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.019301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.019310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.019316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.019324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.019331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.019339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.019346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.019354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.019361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.019369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.019376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.019384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.019391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.019417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:10.528 [2024-12-09 05:14:47.020232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.020255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.020276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.020284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.020293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.020300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.020308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.020316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.020325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.020331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.020340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.020346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.020356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.020362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.020371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.020377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.020385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.020394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.020403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.020410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.020418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.020426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.020434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.020443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.020452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.020459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.020468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.020477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.020486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.020494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.020502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.528 [2024-12-09 05:14:47.020509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.528 [2024-12-09 05:14:47.020518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.529 [2024-12-09 05:14:47.020525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.529 [2024-12-09 05:14:47.020533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.529 [2024-12-09 05:14:47.020540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.529 [2024-12-09 05:14:47.020549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.529 [2024-12-09 05:14:47.020555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.529 [2024-12-09 05:14:47.020553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.529 [2024-12-09 05:14:47.020569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.529 [2024-12-09 05:14:47.020579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be 
set 00:20:10.529 [2024-12-09 05:14:47.020582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.529 [2024-12-09 05:14:47.020587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.529 [2024-12-09 05:14:47.020594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.529 [2024-12-09 05:14:47.020601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.529 [2024-12-09 05:14:47.020610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.529 [2024-12-09 05:14:47.020626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.529 [2024-12-09 05:14:47.020634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.529 [2024-12-09 05:14:47.020640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.529 [2024-12-09 05:14:47.020647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.529 [2024-12-09 05:14:47.020661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:10.529 [2024-12-09 05:14:47.020669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.529 [2024-12-09 05:14:47.020677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.529 [2024-12-09 05:14:47.020683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.529 [2024-12-09 05:14:47.020699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.529 [2024-12-09 05:14:47.020707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.529 [2024-12-09 05:14:47.020714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.529 [2024-12-09 05:14:47.020721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.529 [2024-12-09 05:14:47.020738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.529 [2024-12-09 05:14:47.020746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.529 [2024-12-09 05:14:47.020753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the 
state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.529 [2024-12-09 05:14:47.020760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.529 [2024-12-09 05:14:47.020775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.529 [2024-12-09 05:14:47.020783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.529 [2024-12-09 05:14:47.020791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.529 [2024-12-09 05:14:47.020798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.529 [2024-12-09 05:14:47.020835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.529 [2024-12-09 05:14:47.020920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.020965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.529 [2024-12-09 05:14:47.021017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.021062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.529 [2024-12-09 05:14:47.021110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.529 [2024-12-09 05:14:47.021156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.529 [2024-12-09 05:14:47.021198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same 
with the state(6) to be set 00:20:10.530 [2024-12-09 05:14:47.021242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.530 [2024-12-09 05:14:47.021286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.530 [2024-12-09 05:14:47.021332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.530 [2024-12-09 05:14:47.021379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.530 [2024-12-09 05:14:47.021422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.530 [2024-12-09 05:14:47.021467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.530 [2024-12-09 05:14:47.021516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.530 [2024-12-09 05:14:47.021559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.530 [2024-12-09 05:14:47.021603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.530 [2024-12-09 05:14:47.021647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.530 [2024-12-09 05:14:47.021692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.530 [2024-12-09 05:14:47.021734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.530 [2024-12-09 05:14:47.021779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.530 [2024-12-09 05:14:47.021823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.530 [2024-12-09 05:14:47.021868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.530 [2024-12-09 05:14:47.021909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.530 [2024-12-09 05:14:47.021954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.530 [2024-12-09 05:14:47.022000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.530 [2024-12-09 05:14:47.022050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.530 [2024-12-09 05:14:47.022092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.530 [2024-12-09 05:14:47.022139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.530 [2024-12-09 05:14:47.022183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.530 [2024-12-09 05:14:47.022228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.530 [2024-12-09 05:14:47.022271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.530 [2024-12-09 05:14:47.022316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.530 [2024-12-09 05:14:47.022358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.530 [2024-12-09 05:14:47.022407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.530 [2024-12-09 05:14:47.022450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.530 [2024-12-09 05:14:47.022495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.530 [2024-12-09 05:14:47.022543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.530 [2024-12-09 05:14:47.022590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.530 [2024-12-09 05:14:47.022634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.530 [2024-12-09 05:14:47.022683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.530 [2024-12-09 05:14:47.022729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.530 [2024-12-09 05:14:47.022774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.530 [2024-12-09 05:14:47.022819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.530 [2024-12-09 05:14:47.022863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.530 [2024-12-09 05:14:47.022908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.530 [2024-12-09 05:14:47.022952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.530 [2024-12-09 05:14:47.022997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.530 [2024-12-09 05:14:47.023047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.530 [2024-12-09 05:14:47.023091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be 
set 00:20:10.530 [2024-12-09 05:14:47.023136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0c960 is same with the state(6) to be set 00:20:10.530 [2024-12-09 05:14:47.036416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.530 [2024-12-09 05:14:47.036440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.530 [2024-12-09 05:14:47.036453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.530 [2024-12-09 05:14:47.036464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.530 [2024-12-09 05:14:47.036475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.530 [2024-12-09 05:14:47.036485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.530 [2024-12-09 05:14:47.036497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.530 [2024-12-09 05:14:47.036507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.530 [2024-12-09 05:14:47.036522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.530 [2024-12-09 05:14:47.036531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.530 [2024-12-09 05:14:47.036543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.530 [2024-12-09 05:14:47.036555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.530 [2024-12-09 05:14:47.036567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.530 [2024-12-09 05:14:47.036576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.530 [2024-12-09 05:14:47.036587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.530 [2024-12-09 05:14:47.036597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.530 [2024-12-09 05:14:47.036609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.530 [2024-12-09 05:14:47.036619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.530 [2024-12-09 05:14:47.036631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.530 
[2024-12-09 05:14:47.036641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.530 [2024-12-09 05:14:47.036653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.530 [2024-12-09 05:14:47.036663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.530 [2024-12-09 05:14:47.036675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.530 [2024-12-09 05:14:47.036684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.530 [2024-12-09 05:14:47.036696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.530 [2024-12-09 05:14:47.036706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.530 [2024-12-09 05:14:47.036717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.530 [2024-12-09 05:14:47.036727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.530 [2024-12-09 05:14:47.036739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.530 [2024-12-09 05:14:47.036748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.530 [2024-12-09 05:14:47.036760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.530 [2024-12-09 05:14:47.036769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.530 [2024-12-09 05:14:47.036780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.530 [2024-12-09 05:14:47.036794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.530 [2024-12-09 05:14:47.036806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.530 [2024-12-09 05:14:47.036816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.530 [2024-12-09 05:14:47.036828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.530 [2024-12-09 05:14:47.036836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.531 [2024-12-09 05:14:47.036848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.531 [2024-12-09 
05:14:47.036858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.531 [2024-12-09 05:14:47.036869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.531 [2024-12-09 05:14:47.036878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.531 [2024-12-09 05:14:47.036915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:10.531 [2024-12-09 05:14:47.037178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e81c0 (9): Bad file descriptor 00:20:10.531 [2024-12-09 05:14:47.037208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c61410 (9): Bad file descriptor 00:20:10.531 [2024-12-09 05:14:47.037224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e7d30 (9): Bad file descriptor 00:20:10.531 [2024-12-09 05:14:47.037245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c12640 (9): Bad file descriptor 00:20:10.531 [2024-12-09 05:14:47.037287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.531 [2024-12-09 05:14:47.037301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.531 [2024-12-09 05:14:47.037312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.531 [2024-12-09 05:14:47.037322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.531 [2024-12-09 05:14:47.037334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.531 [2024-12-09 05:14:47.037343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.531 [2024-12-09 05:14:47.037353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.531 [2024-12-09 05:14:47.037363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.531 [2024-12-09 05:14:47.037372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48820 is same with the state(6) to be set 00:20:10.531 [2024-12-09 05:14:47.037398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.531 [2024-12-09 05:14:47.037410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.531 [2024-12-09 05:14:47.037425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.531 [2024-12-09 05:14:47.037435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.531 [2024-12-09 05:14:47.037445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.531 [2024-12-09 05:14:47.037455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.531 [2024-12-09 05:14:47.037466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.531 [2024-12-09 05:14:47.037475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.531 [2024-12-09 05:14:47.037484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60080 is same with the state(6) to be set 00:20:10.531 [2024-12-09 05:14:47.037517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.531 [2024-12-09 05:14:47.037530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.531 [2024-12-09 05:14:47.037540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.531 [2024-12-09 05:14:47.037550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.531 [2024-12-09 05:14:47.037560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.531 [2024-12-09 05:14:47.037569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.531 [2024-12-09 05:14:47.037579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.531 [2024-12-09 05:14:47.037588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.531 [2024-12-09 05:14:47.037597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16fc610 is same with the state(6) to be set 00:20:10.531 [2024-12-09 05:14:47.037619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0c150 (9): Bad file descriptor 00:20:10.531 [2024-12-09 05:14:47.037639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c13550 (9): Bad file descriptor 00:20:10.531 [2024-12-09 05:14:47.037661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dc200 (9): Bad file descriptor 00:20:10.531 [2024-12-09 05:14:47.040615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:10.531 [2024-12-09 05:14:47.041133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:20:10.531 [2024-12-09 05:14:47.041168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c48820 (9): Bad file descriptor 00:20:10.531 [2024-12-09 05:14:47.041311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.531 [2024-12-09 05:14:47.041331] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e81c0 with addr=10.0.0.2, port=4420 00:20:10.531 [2024-12-09 05:14:47.041342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e81c0 is same with the state(6) to be set 00:20:10.531 [2024-12-09 05:14:47.042361] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:10.531 [2024-12-09 05:14:47.042410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e81c0 (9): Bad file descriptor 00:20:10.531 [2024-12-09 05:14:47.042498] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:10.531 [2024-12-09 05:14:47.042561] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:10.531 [2024-12-09 05:14:47.042613] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:10.531 [2024-12-09 05:14:47.042668] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:10.531 [2024-12-09 05:14:47.042717] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:10.531 [2024-12-09 05:14:47.042768] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:10.531 [2024-12-09 05:14:47.042833] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:10.531 [2024-12-09 05:14:47.043103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.531 [2024-12-09 05:14:47.043124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c48820 with addr=10.0.0.2, port=4420 00:20:10.531 [2024-12-09 05:14:47.043137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48820 is same with the state(6) to be set 00:20:10.531 [2024-12-09 05:14:47.043150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:10.531 [2024-12-09 05:14:47.043161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:10.531 [2024-12-09 05:14:47.043173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:10.531 [2024-12-09 05:14:47.043185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:10.531 [2024-12-09 05:14:47.043300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c48820 (9): Bad file descriptor 00:20:10.531 [2024-12-09 05:14:47.043358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:20:10.531 [2024-12-09 05:14:47.043369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:20:10.531 [2024-12-09 05:14:47.043380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:20:10.531 [2024-12-09 05:14:47.043389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:20:10.531 [2024-12-09 05:14:47.047183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60080 (9): Bad file descriptor 00:20:10.531 [2024-12-09 05:14:47.047212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16fc610 (9): Bad file descriptor 00:20:10.531 [2024-12-09 05:14:47.047367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.531 [2024-12-09 05:14:47.047382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.531 [2024-12-09 05:14:47.047399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.531 [2024-12-09 05:14:47.047408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.531 [2024-12-09 05:14:47.047421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.531 [2024-12-09 05:14:47.047430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.531 [2024-12-09 05:14:47.047441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.531 [2024-12-09 05:14:47.047450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.531 [2024-12-09 05:14:47.047461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.531 [2024-12-09 05:14:47.047475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.531 [2024-12-09 05:14:47.047487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.531 [2024-12-09 05:14:47.047495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.531 [2024-12-09 05:14:47.047506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.531 [2024-12-09 05:14:47.047514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.531 [2024-12-09 05:14:47.047526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.531 [2024-12-09 05:14:47.047534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.531 [2024-12-09 05:14:47.047545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.047554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.047565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.047574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.047584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.047594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.047604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.047613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.047623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.047632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.047642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.047651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.047661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.047671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.047682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.047690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.047701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.047709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.047723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.047731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.047743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.047753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.047764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.047773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.047784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.047792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.047803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.047812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.047822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.047832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.047842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.047851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.047861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.047870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.047881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.047890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.047900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.047910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.047921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.047930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.047941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.047949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.047960] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.047975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.047986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.047995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.048012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.048020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.048031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.048039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.048050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.048058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.048068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.048077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.048088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.048097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.048107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.048116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.048126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.048135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.048145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.048154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.048164] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.048172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.048183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.048191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.048202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.048210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.048223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.048231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.048242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.048250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.048260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.048269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.048278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.048288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.048298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.048307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.048317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.532 [2024-12-09 05:14:47.048326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.532 [2024-12-09 05:14:47.048335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.048344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.048355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.048363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.048373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.048382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.048393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.048401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.048412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.048420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.048431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.048439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.048450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.048461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.048472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.048480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.048490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.048499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.048509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.048517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.048527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.048536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.048546] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.048555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.048565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.048574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.048585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.048596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.048607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.048615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.048626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.048634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.048644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ed1b0 is same with the state(6) to be set 00:20:10.533 [2024-12-09 05:14:47.049857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.049872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.049885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.049894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.049905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.049917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.049928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.049936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.049947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.049955] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.049965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.049974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.049987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.049996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.050014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.050023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.050034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.050042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.050052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.050061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.050072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.050080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.050091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.050099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.050109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.050118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.050129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.050138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.050148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.050156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.050169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.050177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.050188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.050196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.050206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.050214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.050224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.050232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.533 [2024-12-09 05:14:47.050243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.533 [2024-12-09 05:14:47.050251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050343] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050536] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050731] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050932] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.050985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.050995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.534 [2024-12-09 05:14:47.051008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.534 [2024-12-09 05:14:47.051018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.051026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.051038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.051046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.051057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.051065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.051075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.051083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.051093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.051101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.051110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbf560 is same with the state(6) to be set 00:20:10.535 [2024-12-09 05:14:47.052322] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 
nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.052978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.052987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.053003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.053012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.535 [2024-12-09 05:14:47.053022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.535 [2024-12-09 05:14:47.053031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.053052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.053073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.053093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.053114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.053136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:10.536 [2024-12-09 05:14:47.053156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.053176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.053195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.053216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.053235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.053255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.053274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.053293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.053314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.053336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 
05:14:47.053356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.053376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.053396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.053417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.053436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.053454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.053475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.053495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.053514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.053535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.053557] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.053576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.053600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.053627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.053637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bec4e0 is same with the state(6) to be set 00:20:10.536 [2024-12-09 05:14:47.054835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.054853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.054867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.054876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.054888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.054896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.054906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.054918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.054931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.054943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.054954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.054966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.054977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.054987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.055005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.055015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.055030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.055039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.536 [2024-12-09 05:14:47.055051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.536 [2024-12-09 05:14:47.055059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055201] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055780] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.537 [2024-12-09 05:14:47.055838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.537 [2024-12-09 05:14:47.055845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.055853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.055860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.055872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.055880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.055890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.055898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.055907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.055915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.055925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.055933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.055942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.055951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.055962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.055969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.055978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.055985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.055995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.056008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.056017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.056025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.056036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.056044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.056053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.056063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.056072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.056081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.056089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.056097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.056107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bed710 is same with the state(6) to be set 00:20:10.538 [2024-12-09 05:14:47.057108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.057123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.057135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.057143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.057152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.057159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.057168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.057177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.057187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.057195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.057206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.057215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.057223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.057231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.057240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.057247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.057257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.057265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.057281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.057292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.057301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.057309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.057317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.057324] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.057333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.057341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.057352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.057361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.057371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.057379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.057388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.057395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.057404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.057411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.057421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.057430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.057440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.057447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.057456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.057463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.057471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.538 [2024-12-09 05:14:47.057478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.538 [2024-12-09 05:14:47.057488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.057983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.057991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.058004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.058014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.058023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:10.539 [2024-12-09 05:14:47.058032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.058041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.058050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.058057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.058066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.058073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.058081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.058091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.058101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.058110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.058121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.058129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.058138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.058145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.058155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.058162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.539 [2024-12-09 05:14:47.058174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.539 [2024-12-09 05:14:47.058182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.058191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.058199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 
05:14:47.058207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.058213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.058223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.058232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.058242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bee9d0 is same with the state(6) to be set 00:20:10.540 [2024-12-09 05:14:47.059280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.540 [2024-12-09 05:14:47.059923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.540 [2024-12-09 05:14:47.059930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.059938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.059946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.059954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.059961] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.059970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.059980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.059989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.060001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.060011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.060019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.060028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.060037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.060047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.060055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.060066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.060074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.060084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.060091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.060099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.060109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.060118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.060126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.060135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.060144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.060152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.060160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.060169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.060176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.060185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.060193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.060203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.060211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.060221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.060228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.060237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.060244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.060253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.060261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.060271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.060280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.060292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.060299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.060308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.060315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.060325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.060333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.060344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.060351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.060362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.060369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.060378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.060385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.060394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.063852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.063871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.063880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.063888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2cee0 is same with the state(6) to be set 00:20:10.541 [2024-12-09 05:14:47.064872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:20:10.541 [2024-12-09 05:14:47.064893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:20:10.541 [2024-12-09 05:14:47.064906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:20:10.541 [2024-12-09 05:14:47.064922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:20:10.541 [2024-12-09 05:14:47.064997] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:20:10.541 [2024-12-09 05:14:47.065032] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 
00:20:10.541 [2024-12-09 05:14:47.065109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:20:10.541 [2024-12-09 05:14:47.065123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:20:10.541 [2024-12-09 05:14:47.065362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.541 [2024-12-09 05:14:47.065379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dc200 with addr=10.0.0.2, port=4420 00:20:10.541 [2024-12-09 05:14:47.065389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dc200 is same with the state(6) to be set 00:20:10.541 [2024-12-09 05:14:47.065505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.541 [2024-12-09 05:14:47.065518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e7d30 with addr=10.0.0.2, port=4420 00:20:10.541 [2024-12-09 05:14:47.065528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e7d30 is same with the state(6) to be set 00:20:10.541 [2024-12-09 05:14:47.065715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.541 [2024-12-09 05:14:47.065729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c12640 with addr=10.0.0.2, port=4420 00:20:10.541 [2024-12-09 05:14:47.065737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c12640 is same with the state(6) to be set 00:20:10.541 [2024-12-09 05:14:47.065818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.541 [2024-12-09 05:14:47.065831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0c150 with addr=10.0.0.2, port=4420 00:20:10.541 [2024-12-09 05:14:47.065841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0c150 is same with the state(6) to be set 00:20:10.541 [2024-12-09 05:14:47.067007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.067026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.067040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.541 [2024-12-09 05:14:47.067048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.541 [2024-12-09 05:14:47.067058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067093] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.542 [2024-12-09 05:14:47.067700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.542 [2024-12-09 05:14:47.067710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.067717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.067726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.067735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.067745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.067753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.067763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.067770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.067779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.067786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.067795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.067802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.067810] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.067817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.067826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.067835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.067846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.067855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.067863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.067873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.067882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.067889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.067897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.067906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.067917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.067926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.067936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.067944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.067952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.067960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.067968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.067976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.067986] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.067994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.068009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.068017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.068026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.068033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.068043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.068052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.068063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.068071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.068081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.068088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.068099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.068105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.068115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.068123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.068135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.068143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.068153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.068160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.068167] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1befc90 is same with the state(6) to be set 00:20:10.543 [2024-12-09 05:14:47.069185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.069201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.069214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.069222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.069232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.069239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.069248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.069256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.069266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.069275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.069286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.069293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.069302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.069309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.069318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.069326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.069335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.069347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.069358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.069366] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.069377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.069385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.069395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.069407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.069418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.543 [2024-12-09 05:14:47.069426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.543 [2024-12-09 05:14:47.069437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.069987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.069995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.070013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.070021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.070034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.070043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.070053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.070061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.070070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.070077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.070087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.070094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:10.544 [2024-12-09 05:14:47.070105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.070112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.070121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.070127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.070137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.544 [2024-12-09 05:14:47.070145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.544 [2024-12-09 05:14:47.070154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.545 [2024-12-09 05:14:47.070161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.545 [2024-12-09 05:14:47.070169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.545 [2024-12-09 05:14:47.070179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.545 [2024-12-09 05:14:47.070190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.545 [2024-12-09 05:14:47.070197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.545 [2024-12-09 05:14:47.070206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.545 [2024-12-09 05:14:47.070214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.545 [2024-12-09 05:14:47.070222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.545 [2024-12-09 05:14:47.070231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.545 [2024-12-09 05:14:47.070240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.545 [2024-12-09 05:14:47.070249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.545 [2024-12-09 05:14:47.070258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.545 [2024-12-09 05:14:47.070266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.545 [2024-12-09 
05:14:47.070274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:10.545 [2024-12-09 05:14:47.070281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:10.545 [2024-12-09 05:14:47.070291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:10.545 [2024-12-09 05:14:47.070300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:10.545 [2024-12-09 05:14:47.070310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:10.545 [2024-12-09 05:14:47.070318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:10.545 [2024-12-09 05:14:47.070326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:10.545 [2024-12-09 05:14:47.070334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:10.545 [2024-12-09 05:14:47.070343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2bc50 is same with the state(6) to be set
00:20:10.545 [2024-12-09 05:14:47.071555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:20:10.545 [2024-12-09 05:14:47.071578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:20:10.545 [2024-12-09 05:14:47.071592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:20:10.545 task offset: 30464 on job bdev=Nvme1n1 fails
00:20:10.545
00:20:10.545                                                                                      Latency(us)
00:20:10.545 [2024-12-09T04:14:47.191Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:20:10.545 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:10.545 Job: Nvme1n1 ended in about 0.79 seconds with error
00:20:10.545      Verification LBA range: start 0x0 length 0x400
00:20:10.545      Nvme1n1                :       0.79     242.20      15.14      80.73       0.00  195871.61   15728.64  218833.25
00:20:10.545 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:10.545 Job: Nvme2n1 ended in about 0.80 seconds with error
00:20:10.545      Verification LBA range: start 0x0 length 0x400
00:20:10.545      Nvme2n1                :       0.80     159.28       9.95      79.64       0.00  259581.03   17666.23  216097.84
00:20:10.545 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:10.545 Job: Nvme3n1 ended in about 0.81 seconds with error
00:20:10.545      Verification LBA range: start 0x0 length 0x400
00:20:10.545      Nvme3n1                :       0.81     244.39      15.27      79.40       0.00  187565.71   24960.67  208803.39
00:20:10.545 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:10.545 Job: Nvme4n1 ended in about 0.81 seconds with error
00:20:10.545      Verification LBA range: start 0x0 length 0x400
00:20:10.545      Nvme4n1                :       0.81     237.45      14.84      79.15       0.00  187914.69   24162.84  207891.59
00:20:10.545 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:10.545 Job: Nvme5n1 ended in about 0.81 seconds with error
00:20:10.545      Verification LBA range: start 0x0 length 0x400
00:20:10.545      Nvme5n1                :       0.81     157.83       9.86      78.91       0.00  246128.94   16640.45  226127.69
00:20:10.545 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:10.545 Job: Nvme6n1 ended in about 0.81 seconds with error
00:20:10.545      Verification LBA range: start 0x0 length 0x400
00:20:10.545      Nvme6n1                :       0.81     157.41       9.84      78.71       0.00  241590.69   23137.06  249834.63
00:20:10.545 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:10.545 Job: Nvme7n1 ended in about 0.82 seconds with error
00:20:10.545      Verification LBA range: start 0x0 length 0x400
00:20:10.545      Nvme7n1                :       0.82     155.52       9.72      77.76       0.00  239614.52   14930.81  242540.19
00:20:10.545 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:10.545 Job: Nvme8n1 ended in about 0.79 seconds with error
00:20:10.545      Verification LBA range: start 0x0 length 0x400
00:20:10.545      Nvme8n1                :       0.79     241.73      15.11      80.58       0.00  168490.74   19831.76  195126.32
00:20:10.545 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:10.545 Job: Nvme9n1 ended in about 0.83 seconds with error
00:20:10.545      Verification LBA range: start 0x0 length 0x400
00:20:10.545      Nvme9n1                :       0.83     155.11       9.69      77.56       0.00  229743.45   35332.45  227951.30
00:20:10.545 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:10.545 Job: Nvme10n1 ended in about 0.82 seconds with error
00:20:10.545      Verification LBA range: start 0x0 length 0x400
00:20:10.545      Nvme10n1               :       0.82     156.33       9.77      78.16       0.00  222376.59   18919.96  246187.41
00:20:10.545 [2024-12-09T04:14:47.191Z] ===================================================================================================================
00:20:10.545 [2024-12-09T04:14:47.191Z] Total                       :              1907.25     119.20     790.59       0.00  213953.39   14930.81  249834.63
00:20:10.545 [2024-12-09 05:14:47.102202] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:10.545 [2024-12-09 05:14:47.102253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:20:10.545 [2024-12-09 05:14:47.102579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:10.545 [2024-12-09 05:14:47.102599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c13550 with addr=10.0.0.2, port=4420
00:20:10.545 [2024-12-09 05:14:47.102611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c13550 is same with the state(6) to be set
00:20:10.545 [2024-12-09 05:14:47.102725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:10.545 [2024-12-09 05:14:47.102738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c61410 with addr=10.0.0.2, port=4420
00:20:10.545 [2024-12-09 05:14:47.102747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c61410 is same with the state(6) to be set
00:20:10.545 [2024-12-09 05:14:47.102760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dc200 (9): Bad file descriptor
00:20:10.545 [2024-12-09 05:14:47.102774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e7d30 (9): Bad file descriptor
00:20:10.545 [2024-12-09 05:14:47.102783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c12640 (9): Bad file descriptor
00:20:10.545 [2024-12-09 05:14:47.102795]
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0c150 (9): Bad file descriptor 00:20:10.545 [2024-12-09 05:14:47.103138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.545 [2024-12-09 05:14:47.103158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e81c0 with addr=10.0.0.2, port=4420 00:20:10.545 [2024-12-09 05:14:47.103168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e81c0 is same with the state(6) to be set 00:20:10.545 [2024-12-09 05:14:47.103285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.545 [2024-12-09 05:14:47.103298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c48820 with addr=10.0.0.2, port=4420 00:20:10.545 [2024-12-09 05:14:47.103308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48820 is same with the state(6) to be set 00:20:10.545 [2024-12-09 05:14:47.103480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.545 [2024-12-09 05:14:47.103493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16fc610 with addr=10.0.0.2, port=4420 00:20:10.545 [2024-12-09 05:14:47.103501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16fc610 is same with the state(6) to be set 00:20:10.545 [2024-12-09 05:14:47.103614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.545 [2024-12-09 05:14:47.103626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60080 with addr=10.0.0.2, port=4420 00:20:10.545 [2024-12-09 05:14:47.103634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60080 is same with the state(6) to be set 00:20:10.545 [2024-12-09 05:14:47.103644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c13550 (9): Bad file descriptor 00:20:10.545 [2024-12-09 05:14:47.103654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c61410 (9): Bad file descriptor 00:20:10.545 [2024-12-09 05:14:47.103663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:20:10.545 [2024-12-09 05:14:47.103671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:20:10.545 [2024-12-09 05:14:47.103680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:20:10.545 [2024-12-09 05:14:47.103690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:20:10.545 [2024-12-09 05:14:47.103700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:20:10.545 [2024-12-09 05:14:47.103708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:20:10.546 [2024-12-09 05:14:47.103715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:20:10.546 [2024-12-09 05:14:47.103722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
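Note (not part of the harness output): the two codes repeating through this stretch are both generic. "ABORTED - SQ DELETION (00/08)" is NVMe status code type 0x0 / status code 0x08, Command Aborted due to SQ Deletion, which is expected once the target starts deleting its submission queues during shutdown; "connect() failed, errno = 111" is Linux ECONNREFUSED, i.e. nothing is listening on 10.0.0.2:4420 any more. A quick way to confirm the errno mapping from a shell, purely as an illustration:

    # illustrative one-liner, not something this run executes
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused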
00:20:10.546 [2024-12-09 05:14:47.103730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:20:10.546 [2024-12-09 05:14:47.103737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:20:10.546 [2024-12-09 05:14:47.103743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:20:10.546 [2024-12-09 05:14:47.103749] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:20:10.546 [2024-12-09 05:14:47.103757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:20:10.546 [2024-12-09 05:14:47.103765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:20:10.546 [2024-12-09 05:14:47.103773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:20:10.546 [2024-12-09 05:14:47.103780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:20:10.546 [2024-12-09 05:14:47.103819] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:20:10.546 [2024-12-09 05:14:47.103832] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:20:10.546 [2024-12-09 05:14:47.104433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e81c0 (9): Bad file descriptor 00:20:10.546 [2024-12-09 05:14:47.104453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c48820 (9): Bad file descriptor 00:20:10.546 [2024-12-09 05:14:47.104468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16fc610 (9): Bad file descriptor 00:20:10.546 [2024-12-09 05:14:47.104476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60080 (9): Bad file descriptor 00:20:10.546 [2024-12-09 05:14:47.104485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:20:10.546 [2024-12-09 05:14:47.104494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:20:10.546 [2024-12-09 05:14:47.104502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:20:10.546 [2024-12-09 05:14:47.104509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:20:10.546 [2024-12-09 05:14:47.104517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:20:10.546 [2024-12-09 05:14:47.104525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:20:10.546 [2024-12-09 05:14:47.104531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:20:10.546 [2024-12-09 05:14:47.104538] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:20:10.546 [2024-12-09 05:14:47.104578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:20:10.546 [2024-12-09 05:14:47.104591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:20:10.546 [2024-12-09 05:14:47.104600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:20:10.546 [2024-12-09 05:14:47.104609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:20:10.546 [2024-12-09 05:14:47.104638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:10.546 [2024-12-09 05:14:47.104649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:10.546 [2024-12-09 05:14:47.104657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:10.546 [2024-12-09 05:14:47.104664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:10.546 [2024-12-09 05:14:47.104671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:20:10.546 [2024-12-09 05:14:47.104677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:20:10.546 [2024-12-09 05:14:47.104684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:20:10.546 [2024-12-09 05:14:47.104692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:20:10.546 [2024-12-09 05:14:47.104700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:20:10.546 [2024-12-09 05:14:47.104707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:20:10.546 [2024-12-09 05:14:47.104716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:20:10.546 [2024-12-09 05:14:47.104724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:20:10.546 [2024-12-09 05:14:47.104730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:20:10.546 [2024-12-09 05:14:47.104736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:20:10.546 [2024-12-09 05:14:47.104743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:20:10.546 [2024-12-09 05:14:47.104750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
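Note (illustrative, not something this run executes): how long the host-side bdev_nvme layer keeps retrying before the "controller reinitialization failed" / "Resetting controller failed" messages above turn into a permanently failed controller is governed by the bdev_nvme_set_options RPC. The JSON parameter names below are the documented ones; the rpc.py flag spellings are assumed to mirror them:

    # hedged sketch -- flag spellings assumed to match the JSON parameter names
    sudo scripts/rpc.py bdev_nvme_set_options \
        --ctrlr-loss-timeout-sec 30 \
        --reconnect-delay-sec 2 \
        --fast-io-fail-timeout-sec 5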
00:20:10.546 [2024-12-09 05:14:47.104953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.546 [2024-12-09 05:14:47.104970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0c150 with addr=10.0.0.2, port=4420 00:20:10.546 [2024-12-09 05:14:47.104980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0c150 is same with the state(6) to be set 00:20:10.546 [2024-12-09 05:14:47.105151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.546 [2024-12-09 05:14:47.105163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c12640 with addr=10.0.0.2, port=4420 00:20:10.546 [2024-12-09 05:14:47.105171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c12640 is same with the state(6) to be set 00:20:10.546 [2024-12-09 05:14:47.105321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.546 [2024-12-09 05:14:47.105333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e7d30 with addr=10.0.0.2, port=4420 00:20:10.546 [2024-12-09 05:14:47.105340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e7d30 is same with the state(6) to be set 00:20:10.546 [2024-12-09 05:14:47.105556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.546 [2024-12-09 05:14:47.105567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dc200 with addr=10.0.0.2, port=4420 00:20:10.546 [2024-12-09 05:14:47.105575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dc200 is same with the state(6) to be set 00:20:10.546 [2024-12-09 05:14:47.105614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0c150 (9): Bad file descriptor 00:20:10.546 [2024-12-09 05:14:47.105626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c12640 (9): Bad file descriptor 00:20:10.546 [2024-12-09 05:14:47.105636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e7d30 (9): Bad file descriptor 00:20:10.546 [2024-12-09 05:14:47.105644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dc200 (9): Bad file descriptor 00:20:10.546 [2024-12-09 05:14:47.105673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:20:10.546 [2024-12-09 05:14:47.105682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:20:10.546 [2024-12-09 05:14:47.105691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:20:10.546 [2024-12-09 05:14:47.105699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:20:10.546 [2024-12-09 05:14:47.105707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:20:10.546 [2024-12-09 05:14:47.105713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:20:10.546 [2024-12-09 05:14:47.105722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:20:10.546 [2024-12-09 05:14:47.105730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:20:10.546 [2024-12-09 05:14:47.105738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:20:10.546 [2024-12-09 05:14:47.105745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:20:10.546 [2024-12-09 05:14:47.105752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:20:10.546 [2024-12-09 05:14:47.105758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:20:10.546 [2024-12-09 05:14:47.105771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:20:10.546 [2024-12-09 05:14:47.105779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:20:10.546 [2024-12-09 05:14:47.105787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:20:10.546 [2024-12-09 05:14:47.105794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:20:10.805 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3638375 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3638375 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3638375 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:20:12.181 05:14:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:12.181 rmmod nvme_tcp 00:20:12.181 rmmod nvme_fabrics 00:20:12.181 rmmod nvme_keyring 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3638101 ']' 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3638101 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3638101 ']' 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3638101 00:20:12.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3638101) - No such process 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3638101 is not found' 00:20:12.181 Process with pid 3638101 is not found 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 
-- # iptables-restore 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.181 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:14.085 00:20:14.085 real 0m7.133s 00:20:14.085 user 0m16.356s 00:20:14.085 sys 0m1.237s 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:14.085 ************************************ 00:20:14.085 END TEST nvmf_shutdown_tc3 00:20:14.085 ************************************ 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:14.085 ************************************ 00:20:14.085 START TEST nvmf_shutdown_tc4 00:20:14.085 ************************************ 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:14.085 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:14.086 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:14.086 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:14.086 Found net devices under 0000:86:00.0: cvl_0_0 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:14.086 Found net devices under 0000:86:00.1: cvl_0_1 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:14.086 05:14:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:14.086 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:14.345 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:14.345 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:14.345 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:14.345 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:14.345 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:14.345 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:14.345 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:14.345 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:14.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:20:14.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:20:14.345 00:20:14.345 --- 10.0.0.2 ping statistics --- 00:20:14.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.345 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:20:14.345 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:14.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:14.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:20:14.345 00:20:14.345 --- 10.0.0.1 ping statistics --- 00:20:14.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.345 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:20:14.345 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.345 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:20:14.345 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:14.345 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.345 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:14.345 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:14.345 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.345 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:14.345 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:14.345 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:14.346 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:14.346 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:14.346 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:14.346 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3639416 00:20:14.346 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3639416 00:20:14.346 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:14.346 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3639416 ']' 00:20:14.346 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.346 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.346 05:14:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.346 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:14.346 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:14.605 [2024-12-09 05:14:51.031278] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:20:14.605 [2024-12-09 05:14:51.031328] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.605 [2024-12-09 05:14:51.098410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:14.605 [2024-12-09 05:14:51.138235] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.605 [2024-12-09 05:14:51.138274] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.605 [2024-12-09 05:14:51.138281] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.605 [2024-12-09 05:14:51.138287] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.605 [2024-12-09 05:14:51.138292] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:14.605 [2024-12-09 05:14:51.139893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.605 [2024-12-09 05:14:51.139963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:14.605 [2024-12-09 05:14:51.140067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.605 [2024-12-09 05:14:51.140068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:14.605 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.605 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:20:14.605 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:14.605 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:14.605 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:14.865 [2024-12-09 05:14:51.286918] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.865 05:14:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:14.865 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:14.866 
05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.866 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:14.866 Malloc1 00:20:14.866 [2024-12-09 05:14:51.403717] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.866 Malloc2 00:20:14.866 Malloc3 00:20:15.124 Malloc4 00:20:15.124 Malloc5 00:20:15.124 Malloc6 00:20:15.124 Malloc7 00:20:15.124 Malloc8 00:20:15.124 Malloc9 00:20:15.383 Malloc10 00:20:15.383 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.383 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:15.383 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:15.383 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:15.383 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3639686 00:20:15.383 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:20:15.383 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:20:15.383 [2024-12-09 05:14:51.908650] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
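[editor's note] The long spdk_nvme_perf command line traced above is what drives I/O against the ten subsystems while the shutdown is forced; the deprecation warning that follows it is just the initiator connecting through the discovery service on the same 10.0.0.2:4420 listener. Below is the same invocation restated one option per line with best-effort annotations; the -O and -P readings in particular are assumptions, so consult spdk_nvme_perf --help for the authoritative text.

# Same perf invocation as in the trace, split into an array so each option
# can carry a comment. Annotations are best-effort, not authoritative.
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
args=(
    -q 128          # queue depth per qpair
    -o 45056        # I/O size in bytes (44 KiB)
    -O 4096         # likely the I/O unit size in bytes (assumption)
    -w randwrite    # workload pattern: random writes
    -t 20           # run time in seconds
    -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420'   # remote transport ID
    -P 4            # likely the number of I/O qpairs per namespace (assumption)
)
"$PERF" "${args[@]}"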
00:20:20.660 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:20.660 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3639416 00:20:20.660 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3639416 ']' 00:20:20.660 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3639416 00:20:20.660 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:20:20.660 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:20.660 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3639416 00:20:20.660 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:20.660 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:20.660 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3639416' 00:20:20.660 killing process with pid 3639416 00:20:20.660 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3639416 00:20:20.660 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3639416 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 starting I/O failed: -6 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 starting I/O failed: -6 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 starting I/O failed: -6 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 starting I/O failed: -6 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 starting I/O failed: -6 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 starting I/O failed: -6 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 starting I/O failed: -6 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with 
error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 starting I/O failed: -6 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 starting I/O failed: -6 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 starting I/O failed: -6 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 [2024-12-09 05:14:56.915742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 starting I/O failed: -6 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 starting I/O failed: -6 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 starting I/O failed: -6 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 starting I/O failed: -6 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 starting I/O failed: -6 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 starting I/O failed: -6 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 starting I/O failed: -6 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 starting I/O failed: -6 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 starting I/O failed: -6 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 starting I/O failed: -6 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 starting I/O failed: -6 00:20:20.660 Write completed with error (sct=0, sc=8) 00:20:20.660 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 
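[editor's note] The killprocess trace a little further up, just before the first burst of write errors, is the step that yanks the target (pid 3639416) out from under the running perf job. The helper paraphrased below follows the same checks visible in that trace: refuse an empty pid, confirm the process is alive with kill -0, refuse to kill a bare "sudo" wrapper, then kill and wait. It is a simplified paraphrase of the traced logic, not the verbatim autotest_common.sh helper; signal choice and error handling here are assumptions.

# Paraphrase of the killprocess pattern traced above.
kill_target() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # the '[ -z ... ]' guard in the trace
    kill -0 "$pid" 2>/dev/null || return 1    # is the process still alive?
    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1        # never kill the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                               # reap it, as the trace's wait does
}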
Write completed with error (sct=0, sc=8) 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 [2024-12-09 05:14:56.916713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 Write completed with error (sct=0, sc=8) 
00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 [2024-12-09 05:14:56.917718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.661 Write completed with error (sct=0, sc=8) 00:20:20.661 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 
00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 [2024-12-09 05:14:56.919341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) 
on qpair id 4 00:20:20.662 NVMe io qpair process completion error 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 [2024-12-09 05:14:56.920307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O failed: -6 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 Write completed with error (sct=0, sc=8) 00:20:20.662 starting I/O 
failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 [2024-12-09 05:14:56.921198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error 
(sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 [2024-12-09 05:14:56.922254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O 
failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.663 Write completed with error (sct=0, sc=8) 00:20:20.663 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 [2024-12-09 05:14:56.923094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2663f90 is same with the state(6) to be set 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 [2024-12-09 05:14:56.923136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x2663f90 is same with the state(6) to be set 00:20:20.664 [2024-12-09 05:14:56.923145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2663f90 is same with the state(6) to be set 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 [2024-12-09 05:14:56.923153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2663f90 is same with the state(6) to be set 00:20:20.664 starting I/O failed: -6 00:20:20.664 [2024-12-09 05:14:56.923161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2663f90 is same with the state(6) to be set 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 [2024-12-09 05:14:56.924100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:20.664 NVMe io qpair process completion error 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 
00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.664 starting I/O failed: -6 00:20:20.664 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 [2024-12-09 05:14:56.925161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 
00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 [2024-12-09 05:14:56.926064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed 
with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.665 starting I/O failed: -6 00:20:20.665 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 [2024-12-09 05:14:56.927047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 
00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 
00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 [2024-12-09 05:14:56.928548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:20.666 NVMe io qpair process completion error 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.666 starting I/O failed: -6 00:20:20.666 Write completed with error (sct=0, sc=8) 00:20:20.667 Write completed with error (sct=0, sc=8) 00:20:20.667 Write completed with error (sct=0, sc=8) 00:20:20.667 Write completed with error (sct=0, sc=8) 00:20:20.667 starting I/O failed: -6 00:20:20.667 Write completed with error (sct=0, sc=8) 00:20:20.667 Write completed with error (sct=0, sc=8) 00:20:20.667 Write completed with error (sct=0, sc=8) 00:20:20.667 Write completed with error (sct=0, sc=8) 00:20:20.667 starting I/O failed: -6 00:20:20.667 Write completed with error (sct=0, sc=8) 00:20:20.667 Write completed with error (sct=0, sc=8) 00:20:20.667 Write completed with error (sct=0, sc=8) 00:20:20.667 Write completed with error (sct=0, sc=8) 00:20:20.667 starting I/O failed: -6 00:20:20.667 Write completed with error (sct=0, sc=8) 00:20:20.667 Write completed with error (sct=0, sc=8) 00:20:20.667 Write completed with error (sct=0, sc=8) 00:20:20.667 Write completed with error (sct=0, sc=8) 00:20:20.667 starting I/O failed: -6 00:20:20.667 Write completed with error (sct=0, sc=8) 00:20:20.667 Write completed with error (sct=0, sc=8) 00:20:20.667 Write completed with error (sct=0, sc=8) 00:20:20.667 Write completed with error (sct=0, sc=8) 00:20:20.667 starting I/O failed: -6 00:20:20.667 Write completed with error (sct=0, sc=8) 00:20:20.667 Write completed with error (sct=0, sc=8) 00:20:20.667 Write completed with error (sct=0, sc=8) 00:20:20.667 Write completed with error (sct=0, sc=8) 00:20:20.667 starting I/O failed: -6 00:20:20.667 Write completed with error (sct=0, sc=8) 00:20:20.667 [2024-12-09 05:14:56.930255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:20.667 Write completed with error (sct=0, sc=8) 00:20:20.667 Write completed with error (sct=0, sc=8) 00:20:20.667 Write 
completed with error (sct=0, sc=8)
[condensed: runs of "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records, one per outstanding write, surround each of the qpair errors below]
00:20:20.667 [2024-12-09 05:14:56.931169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:20.668 [2024-12-09 05:14:56.932211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:20.669 [2024-12-09 05:14:56.934290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:20.669 NVMe io qpair process completion error
00:20:20.669 [2024-12-09 05:14:56.935366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:20.670 [2024-12-09 05:14:56.936277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:20.670 [2024-12-09 05:14:56.937323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:20.671 [2024-12-09 05:14:56.940628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:20.671 NVMe io qpair process completion error
00:20:20.671 [2024-12-09 05:14:56.941586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:20.672 [2024-12-09 05:14:56.942499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:20.672 [2024-12-09 05:14:56.943705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:20.673 [2024-12-09 05:14:56.946724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:20.673 NVMe io qpair process completion error
00:20:20.673 [2024-12-09 05:14:56.947720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:20.674 [2024-12-09 05:14:56.948610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:20.674 [2024-12-09 05:14:56.949619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:20.675 [2024-12-09 05:14:56.951260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:20.675 NVMe io qpair process completion error
00:20:20.676 [2024-12-09 05:14:56.952329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:20.676 [2024-12-09 05:14:56.953206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:20.676 starting I/O
failed: -6 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 starting I/O failed: -6 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 starting I/O failed: -6 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 starting I/O failed: -6 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 starting I/O failed: -6 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 starting I/O failed: -6 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 starting I/O failed: -6 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 starting I/O failed: -6 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 starting I/O failed: -6 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 starting I/O failed: -6 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 starting I/O failed: -6 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 starting I/O failed: -6 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 starting I/O failed: -6 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 starting I/O failed: -6 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 starting I/O failed: -6 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 starting I/O failed: -6 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 starting I/O failed: -6 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 starting I/O failed: -6 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 starting I/O failed: -6 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 starting I/O failed: -6 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 starting I/O failed: -6 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 starting I/O failed: -6 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 starting I/O failed: -6 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 starting I/O failed: -6 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 starting I/O failed: -6 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 starting I/O failed: -6 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 Write completed with error (sct=0, sc=8) 00:20:20.676 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 [2024-12-09 05:14:56.954209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 
starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 
starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 [2024-12-09 05:14:56.956348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:20.677 NVMe io qpair process completion error 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 
starting I/O failed: -6 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.677 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 [2024-12-09 05:14:56.957321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:20.678 starting I/O failed: -6 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 
Write completed with error (sct=0, sc=8) 00:20:20.678 [2024-12-09 05:14:56.958130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error 
(sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.678 starting I/O failed: -6 00:20:20.678 Write completed with error (sct=0, sc=8) 00:20:20.679 [2024-12-09 05:14:56.959201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error 
(sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 [2024-12-09 05:14:56.963710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:20.679 NVMe io qpair process completion error 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 
00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 starting I/O failed: -6 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.679 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 [2024-12-09 05:14:56.964754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 
00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 [2024-12-09 05:14:56.965645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, 
sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.680 Write completed with error (sct=0, sc=8) 00:20:20.680 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 [2024-12-09 05:14:56.966682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 
00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 
00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 Write completed with error (sct=0, sc=8) 00:20:20.681 starting I/O failed: -6 00:20:20.681 [2024-12-09 05:14:56.970135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:20.681 NVMe io qpair process completion error 00:20:20.681 Initializing NVMe Controllers 00:20:20.681 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:20:20.681 Controller IO queue size 128, less than required. 00:20:20.681 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:20.682 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:20:20.682 Controller IO queue size 128, less than required. 00:20:20.682 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:20.682 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:20:20.682 Controller IO queue size 128, less than required. 00:20:20.682 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:20.682 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:20:20.682 Controller IO queue size 128, less than required. 00:20:20.682 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:20.682 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:20:20.682 Controller IO queue size 128, less than required. 00:20:20.682 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:20.682 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:20:20.682 Controller IO queue size 128, less than required. 00:20:20.682 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:20.682 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:20:20.682 Controller IO queue size 128, less than required. 
00:20:20.682 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:20.682 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:20:20.682 Controller IO queue size 128, less than required. 00:20:20.682 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:20.682 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:20.682 Controller IO queue size 128, less than required. 00:20:20.682 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:20.682 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:20:20.682 Controller IO queue size 128, less than required. 00:20:20.682 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:20.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:20:20.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:20:20.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:20:20.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:20:20.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:20:20.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:20:20.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:20:20.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:20:20.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:20.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:20:20.682 Initialization complete. Launching workers. 
00:20:20.682 ======================================================== 00:20:20.682 Latency(us) 00:20:20.682 Device Information : IOPS MiB/s Average min max 00:20:20.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2145.86 92.20 59654.58 715.73 97478.85 00:20:20.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2143.08 92.09 59751.10 916.12 111316.85 00:20:20.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2142.87 92.08 59776.43 912.03 109252.59 00:20:20.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2144.15 92.13 59773.45 706.46 108450.91 00:20:20.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2140.74 91.98 59897.61 746.83 110797.74 00:20:20.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2165.08 93.03 59235.40 891.28 112960.69 00:20:20.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2190.71 94.13 58560.25 636.86 102781.30 00:20:20.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2158.04 92.73 59494.47 913.29 120572.33 00:20:20.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2170.00 93.24 58503.65 982.09 100448.39 00:20:20.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2178.75 93.62 58278.65 629.56 99331.55 00:20:20.682 ======================================================== 00:20:20.682 Total : 21579.29 927.24 59288.36 629.56 120572.33 00:20:20.682 00:20:20.682 [2024-12-09 05:14:56.973131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcceef0 is same with the state(6) to be set 00:20:20.682 [2024-12-09 05:14:56.973178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xccf740 is same with the state(6) to be set 00:20:20.682 [2024-12-09 05:14:56.973209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd0900 is same with the state(6) to be set 00:20:20.682 [2024-12-09 05:14:56.973240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcce560 is same with the state(6) to be set 00:20:20.682 [2024-12-09 05:14:56.973268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xccebc0 is same with the state(6) to be set 00:20:20.682 [2024-12-09 05:14:56.973297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcce890 is same with the state(6) to be set 00:20:20.682 [2024-12-09 05:14:56.973326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xccfa70 is same with the state(6) to be set 00:20:20.682 [2024-12-09 05:14:56.973354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xccf410 is same with the state(6) to be set 00:20:20.682 [2024-12-09 05:14:56.973382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd0720 is same with the state(6) to be set 00:20:20.682 [2024-12-09 05:14:56.973411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd0ae0 is same with the state(6) to be set 00:20:20.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:20:20.940 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:20:21.875 05:14:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3639686 00:20:21.875 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:20:21.875 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3639686 00:20:21.875 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:20:21.875 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:21.875 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:20:21.875 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:21.875 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3639686 00:20:21.875 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:20:21.875 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:21.875 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:21.875 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:21.875 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:20:21.875 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:21.875 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:21.875 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:21.875 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:21.875 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:21.875 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:20:21.875 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:21.875 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:20:21.875 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:21.875 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:21.875 rmmod nvme_tcp 00:20:21.875 rmmod nvme_fabrics 00:20:21.875 rmmod nvme_keyring 00:20:21.875 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:21.875 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:20:21.875 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:20:21.876 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3639416 ']' 00:20:21.876 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3639416 00:20:21.876 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3639416 ']' 00:20:21.876 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3639416 00:20:21.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3639416) - No such process 00:20:21.876 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3639416 is not found' 00:20:21.876 Process with pid 3639416 is not found 00:20:21.876 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:21.876 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:21.876 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:21.876 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:20:21.876 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:20:21.876 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:21.876 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:20:21.876 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:21.876 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:21.876 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.876 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.876 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:24.406 00:20:24.406 real 0m9.801s 00:20:24.406 user 0m25.170s 00:20:24.406 sys 0m5.040s 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:24.406 ************************************ 00:20:24.406 END TEST nvmf_shutdown_tc4 00:20:24.406 ************************************ 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:20:24.406 00:20:24.406 real 0m40.018s 00:20:24.406 user 1m37.531s 00:20:24.406 sys 0m13.628s 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
common/autotest_common.sh@10 -- # set +x 00:20:24.406 ************************************ 00:20:24.406 END TEST nvmf_shutdown 00:20:24.406 ************************************ 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:24.406 ************************************ 00:20:24.406 START TEST nvmf_nsid 00:20:24.406 ************************************ 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:24.406 * Looking for test storage... 00:20:24.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:24.406 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:24.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.407 --rc genhtml_branch_coverage=1 00:20:24.407 --rc genhtml_function_coverage=1 00:20:24.407 --rc genhtml_legend=1 00:20:24.407 --rc geninfo_all_blocks=1 00:20:24.407 --rc geninfo_unexecuted_blocks=1 00:20:24.407 00:20:24.407 ' 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:24.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.407 --rc genhtml_branch_coverage=1 00:20:24.407 --rc genhtml_function_coverage=1 00:20:24.407 --rc genhtml_legend=1 00:20:24.407 --rc geninfo_all_blocks=1 00:20:24.407 --rc geninfo_unexecuted_blocks=1 00:20:24.407 00:20:24.407 ' 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:24.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.407 --rc genhtml_branch_coverage=1 00:20:24.407 --rc genhtml_function_coverage=1 00:20:24.407 --rc genhtml_legend=1 00:20:24.407 --rc geninfo_all_blocks=1 00:20:24.407 --rc geninfo_unexecuted_blocks=1 00:20:24.407 00:20:24.407 ' 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:24.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.407 --rc genhtml_branch_coverage=1 00:20:24.407 --rc genhtml_function_coverage=1 00:20:24.407 --rc genhtml_legend=1 00:20:24.407 --rc geninfo_all_blocks=1 00:20:24.407 --rc geninfo_unexecuted_blocks=1 00:20:24.407 00:20:24.407 ' 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:24.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:20:24.407 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:29.782 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:29.782 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:20:29.782 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:29.782 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:29.782 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:29.782 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:29.782 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:29.782 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:20:29.782 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:29.782 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:20:29.782 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:20:29.782 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:20:29.782 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:20:29.782 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:20:29.782 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:29.783 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:29.783 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
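The trace around this point gathers the supported NVMe-oF NICs by matching PCI vendor:device IDs (Intel 0x8086:0x159b for the E810 parts found here) and then, in the lines that follow, maps each matching PCI function to its kernel net device through that device's sysfs node. The harness builds its own PCI cache for this; the lspci-based enumeration below is only an illustrative sketch of the same idea, assuming lspci is installed and the 0x159b device ID from this run:

    # Enumerate Intel E810 (8086:159b) functions and map each one to its net interface.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        # The kernel exposes the bound net device under the PCI device's sysfs node.
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] || continue
            dev=${netdir##*/}
            state=$(cat /sys/class/net/"$dev"/operstate 2>/dev/null)
            echo "Found net device under $pci: $dev ($state)"
        done
    done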
00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:29.783 Found net devices under 0000:86:00.0: cvl_0_0 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:29.783 Found net devices under 0000:86:00.1: cvl_0_1 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:29.783 05:15:06 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:29.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:29.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:20:29.783 00:20:29.783 --- 10.0.0.2 ping statistics --- 00:20:29.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.783 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:29.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:29.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:20:29.783 00:20:29.783 --- 10.0.0.1 ping statistics --- 00:20:29.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.783 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3644268 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3644268 00:20:29.783 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3644268 ']' 00:20:29.784 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.784 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.784 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.784 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.784 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:29.784 [2024-12-09 05:15:06.402882] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
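The nvmf_tcp_init sequence traced above builds the usual two-endpoint TCP topology for these tests: one E810 port (cvl_0_0) is moved into a dedicated network namespace and becomes the target at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule accepts NVMe/TCP traffic on port 4420, and connectivity is verified with a ping in each direction before nvmf_tgt is started inside the namespace. A condensed sketch of that setup, reusing the interface names and addresses from this run:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                     # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root namespace -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1              # target namespace -> root namespace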
00:20:29.784 [2024-12-09 05:15:06.402925] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.042 [2024-12-09 05:15:06.470875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.042 [2024-12-09 05:15:06.513639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.042 [2024-12-09 05:15:06.513675] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:30.042 [2024-12-09 05:15:06.513683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:30.042 [2024-12-09 05:15:06.513689] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:30.042 [2024-12-09 05:15:06.513694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:30.042 [2024-12-09 05:15:06.514268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3644291 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=2104ba39-e0ce-47d4-b775-ac03343bfa5b 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=b86e6b15-d9b9-48a8-81a6-89f70b9c40a9 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=e4159ed0-e19c-4538-aa88-b42d11d78f6e 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.042 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:30.301 null0 00:20:30.301 null1 00:20:30.301 [2024-12-09 05:15:06.699700] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:20:30.301 [2024-12-09 05:15:06.699746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3644291 ] 00:20:30.301 null2 00:20:30.301 [2024-12-09 05:15:06.707958] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.301 [2024-12-09 05:15:06.732177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:30.301 [2024-12-09 05:15:06.764580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.301 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.301 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3644291 /var/tmp/tgt2.sock 00:20:30.301 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3644291 ']' 00:20:30.301 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:20:30.301 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:30.301 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:20:30.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
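By this point the first target (started inside the namespace) is listening on 10.0.0.2 port 4420 with null bdevs null0, null1 and null2 created, three namespace UUIDs have been generated, and a second spdk_tgt (pid 3644291) has been launched on its own RPC socket, /var/tmp/tgt2.sock, so it can be configured independently. The rpc_cmd batch itself is not expanded in this excerpt, so the exact calls are not shown; the sketch below only illustrates how a null bdev, a subsystem, and a namespace with a caller-chosen UUID are typically wired up with rpc.py, using the NQN and first UUID from this run:

    RPC=./scripts/rpc.py
    $RPC nvmf_create_transport -t tcp
    $RPC bdev_null_create null0 100 4096               # 100 MiB null bdev, 4 KiB blocks
    $RPC nvmf_create_subsystem nqn.2024-10.io.spdk:cnode0 -a
    # Attach the bdev as a namespace with an explicit UUID so the NGUID/UUID the
    # host sees can be checked later with 'nvme id-ns'.
    $RPC nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode0 null0 \
        --uuid 2104ba39-e0ce-47d4-b775-ac03343bfa5b
    $RPC nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420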
00:20:30.301 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:30.301 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:30.301 [2024-12-09 05:15:06.808582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.559 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:30.559 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:30.559 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:20:30.817 [2024-12-09 05:15:07.345205] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.817 [2024-12-09 05:15:07.361314] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:20:30.817 nvme0n1 nvme0n2 00:20:30.817 nvme1n1 00:20:30.817 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:20:30.817 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:20:30.817 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:32.187 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:20:32.187 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:20:32.187 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:20:32.187 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:20:32.187 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:20:32.187 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:20:32.187 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:20:32.187 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:32.187 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:32.187 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:32.187 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:20:32.187 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:20:32.187 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:33.122 05:15:09 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 2104ba39-e0ce-47d4-b775-ac03343bfa5b 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2104ba39e0ce47d4b775ac03343bfa5b 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2104BA39E0CE47D4B775AC03343BFA5B 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 2104BA39E0CE47D4B775AC03343BFA5B == \2\1\0\4\B\A\3\9\E\0\C\E\4\7\D\4\B\7\7\5\A\C\0\3\3\4\3\B\F\A\5\B ]] 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid b86e6b15-d9b9-48a8-81a6-89f70b9c40a9 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=b86e6b15d9b948a881a689f70b9c40a9 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo B86E6B15D9B948A881A689F70B9C40A9 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ B86E6B15D9B948A881A689F70B9C40A9 == \B\8\6\E\6\B\1\5\D\9\B\9\4\8\A\8\8\1\A\6\8\9\F\7\0\B\9\C\4\0\A\9 ]] 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:33.122 05:15:09 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid e4159ed0-e19c-4538-aa88-b42d11d78f6e 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e4159ed0e19c4538aa88b42d11d78f6e 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E4159ED0E19C4538AA88B42D11D78F6E 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ E4159ED0E19C4538AA88B42D11D78F6E == \E\4\1\5\9\E\D\0\E\1\9\C\4\5\3\8\A\A\8\8\B\4\2\D\1\1\D\7\8\F\6\E ]] 00:20:33.122 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:20:33.380 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:20:33.380 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:20:33.380 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3644291 00:20:33.380 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3644291 ']' 00:20:33.380 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3644291 00:20:33.380 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:33.380 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:33.380 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3644291 00:20:33.380 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:33.380 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:33.380 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3644291' 00:20:33.380 killing process with pid 3644291 00:20:33.380 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3644291 00:20:33.380 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3644291 00:20:33.945 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:20:33.946 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:33.946 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:20:33.946 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:33.946 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:20:33.946 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 
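The verification pass traced above is the core of the nsid test: after nvme connect reaches nqn.2024-10.io.spdk:cnode2 on 10.0.0.1 port 4421, each namespace's NGUID is read back with nvme id-ns in JSON form and compared against the UUID assigned at creation time, with dashes stripped and case normalized. A compact sketch of the same check, using the controller name, host NQN/ID and UUIDs from this run (check_nguid is an illustrative helper, not part of nsid.sh):

    # Connect using the host NQN/ID generated earlier in the run.
    nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid=80aaeb9f-0274-ea11-906e-0017a4403562

    check_nguid() {
        local dev=$1 uuid=$2
        # NGUID as reported by the controller, upper-cased for comparison.
        local nguid=$(nvme id-ns "$dev" -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
        # Expected value: the creation-time UUID with dashes removed.
        local want=$(echo "$uuid" | tr -d - | tr '[:lower:]' '[:upper:]')
        [[ $nguid == "$want" ]] && echo "$dev: NGUID matches $uuid"
    }

    check_nguid /dev/nvme0n1 2104ba39-e0ce-47d4-b775-ac03343bfa5b
    check_nguid /dev/nvme0n2 b86e6b15-d9b9-48a8-81a6-89f70b9c40a9
    check_nguid /dev/nvme0n3 e4159ed0-e19c-4538-aa88-b42d11d78f6e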
00:20:33.946 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:33.946 rmmod nvme_tcp 00:20:33.946 rmmod nvme_fabrics 00:20:33.946 rmmod nvme_keyring 00:20:33.946 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:33.946 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:20:33.946 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:20:33.946 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3644268 ']' 00:20:33.946 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3644268 00:20:33.946 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3644268 ']' 00:20:33.946 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3644268 00:20:33.946 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:33.946 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:33.946 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3644268 00:20:33.946 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:33.946 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:33.946 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3644268' 00:20:33.946 killing process with pid 3644268 00:20:33.946 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3644268 00:20:33.946 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3644268 00:20:34.204 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:34.204 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:34.204 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:34.204 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:20:34.204 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:20:34.204 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:34.204 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:20:34.204 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:34.204 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:34.204 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.204 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:34.204 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.106 05:15:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:36.106 00:20:36.106 real 0m12.093s 00:20:36.106 user 0m9.720s 00:20:36.106 sys 0m5.200s 00:20:36.106 05:15:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:20:36.106 05:15:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:36.106 ************************************ 00:20:36.106 END TEST nvmf_nsid 00:20:36.106 ************************************ 00:20:36.106 05:15:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:20:36.106 00:20:36.106 real 11m45.593s 00:20:36.106 user 25m26.722s 00:20:36.106 sys 3m34.052s 00:20:36.106 05:15:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:36.106 05:15:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:36.107 ************************************ 00:20:36.107 END TEST nvmf_target_extra 00:20:36.107 ************************************ 00:20:36.107 05:15:12 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:36.107 05:15:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:36.107 05:15:12 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:36.107 05:15:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:36.365 ************************************ 00:20:36.365 START TEST nvmf_host 00:20:36.365 ************************************ 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:36.365 * Looking for test storage... 00:20:36.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:36.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.365 --rc genhtml_branch_coverage=1 00:20:36.365 --rc genhtml_function_coverage=1 00:20:36.365 --rc genhtml_legend=1 00:20:36.365 --rc geninfo_all_blocks=1 00:20:36.365 --rc geninfo_unexecuted_blocks=1 00:20:36.365 00:20:36.365 ' 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:36.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.365 --rc genhtml_branch_coverage=1 00:20:36.365 --rc genhtml_function_coverage=1 00:20:36.365 --rc genhtml_legend=1 00:20:36.365 --rc geninfo_all_blocks=1 00:20:36.365 --rc geninfo_unexecuted_blocks=1 00:20:36.365 00:20:36.365 ' 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:36.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.365 --rc genhtml_branch_coverage=1 00:20:36.365 --rc genhtml_function_coverage=1 00:20:36.365 --rc genhtml_legend=1 00:20:36.365 --rc geninfo_all_blocks=1 00:20:36.365 --rc geninfo_unexecuted_blocks=1 00:20:36.365 00:20:36.365 ' 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:36.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.365 --rc genhtml_branch_coverage=1 00:20:36.365 --rc genhtml_function_coverage=1 00:20:36.365 --rc genhtml_legend=1 00:20:36.365 --rc geninfo_all_blocks=1 00:20:36.365 --rc geninfo_unexecuted_blocks=1 00:20:36.365 00:20:36.365 ' 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
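The "lt 1.15 2" gate traced here (the same check ran before nvmf_nsid) decides whether the installed lcov is old enough to need the extra branch/function-coverage flags: both version strings are split on ".", "-" and ":" into arrays and compared field by field numerically. A minimal reimplementation of that comparison, assuming plain numeric, dot-separated versions rather than the harness's full cmp_versions logic:

    # Return success if $1 is strictly older than $2 (e.g. version_lt 1.15 2).
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "old lcov: enabling branch/function coverage flags"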
00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:36.365 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:36.365 05:15:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.623 ************************************ 00:20:36.623 START TEST nvmf_multicontroller 00:20:36.623 ************************************ 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:36.623 * Looking for test storage... 
00:20:36.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:36.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.623 --rc genhtml_branch_coverage=1 00:20:36.623 --rc genhtml_function_coverage=1 00:20:36.623 --rc genhtml_legend=1 00:20:36.623 --rc geninfo_all_blocks=1 00:20:36.623 --rc geninfo_unexecuted_blocks=1 00:20:36.623 00:20:36.623 ' 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:36.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.623 --rc genhtml_branch_coverage=1 00:20:36.623 --rc genhtml_function_coverage=1 00:20:36.623 --rc genhtml_legend=1 00:20:36.623 --rc geninfo_all_blocks=1 00:20:36.623 --rc geninfo_unexecuted_blocks=1 00:20:36.623 00:20:36.623 ' 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:36.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.623 --rc genhtml_branch_coverage=1 00:20:36.623 --rc genhtml_function_coverage=1 00:20:36.623 --rc genhtml_legend=1 00:20:36.623 --rc geninfo_all_blocks=1 00:20:36.623 --rc geninfo_unexecuted_blocks=1 00:20:36.623 00:20:36.623 ' 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:36.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.623 --rc genhtml_branch_coverage=1 00:20:36.623 --rc genhtml_function_coverage=1 00:20:36.623 --rc genhtml_legend=1 00:20:36.623 --rc geninfo_all_blocks=1 00:20:36.623 --rc geninfo_unexecuted_blocks=1 00:20:36.623 00:20:36.623 ' 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:36.623 05:15:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:36.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:36.623 05:15:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:36.623 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:36.624 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:36.624 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:36.624 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.624 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:36.624 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.624 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:36.624 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:36.624 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:20:36.624 05:15:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:20:41.892 
05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:41.892 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:41.892 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:41.892 05:15:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:41.892 Found net devices under 0000:86:00.0: cvl_0_0 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:41.892 Found net devices under 0000:86:00.1: cvl_0_1 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
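The discovery loop above walks the e810 device IDs (0x8086:0x159b), resolves each PCI function to its kernel netdev through sysfs, and records cvl_0_0 and cvl_0_1 as the test interfaces before nvmf_tcp_init (traced next) carves the topology into namespaces. A minimal sketch of that PCI-to-netdev mapping, assuming the same two ports found in this run:

    #!/usr/bin/env bash
    # Map each Intel E810 function found in this run to its net interface name,
    # the same way gather_supported_nvmf_pci_devs resolves cvl_0_0 / cvl_0_1 above.
    for pci in 0000:86:00.0 0000:86:00.1; do
        # every netdev bound to this PCI function shows up under .../net/
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $netdir ]] || continue
            echo "Found net devices under $pci: ${netdir##*/}"
        done
    done

The printed lines correspond to the "Found net devices under 0000:86:00.x" messages in the trace.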
00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:41.892 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:41.893 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:41.893 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:41.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:41.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:20:41.893 00:20:41.893 --- 10.0.0.2 ping statistics --- 00:20:41.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.893 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:20:41.893 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:41.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:41.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:20:41.893 00:20:41.893 --- 10.0.0.1 ping statistics --- 00:20:41.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.893 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:20:41.893 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:41.893 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:20:41.893 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:41.893 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:41.893 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:41.893 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:41.893 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:41.893 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:41.893 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:41.893 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:41.893 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:41.893 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:41.893 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:41.893 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3648887 00:20:41.893 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3648887 00:20:41.893 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:41.893 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3648887 ']' 00:20:41.893 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.893 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:41.893 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.893 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:41.893 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:41.893 [2024-12-09 05:15:18.408856] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
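The nvmf_tcp_init trace above splits the two ports across namespaces: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables ACCEPT opens TCP port 4420, and a ping in each direction proves the back-to-back link before the target application (whose startup banner begins above) is launched inside the namespace. A condensed sketch of the same plumbing, using the interface names and addresses from this run:

    #!/usr/bin/env bash
    set -e
    TGT_NS=cvl_0_0_ns_spdk                             # namespace owning the target-side port
    ip netns add "$TGT_NS"
    ip link set cvl_0_0 netns "$TGT_NS"                # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address (root namespace)
    ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec "$TGT_NS" ip link set cvl_0_0 up
    ip netns exec "$TGT_NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec "$TGT_NS" ping -c 1 10.0.0.1         # target -> initiator

The sub-millisecond round-trip times reported above confirm the link is usable before nvmfappstart brings up the target.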
00:20:41.893 [2024-12-09 05:15:18.408901] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.893 [2024-12-09 05:15:18.476649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:41.893 [2024-12-09 05:15:18.522212] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:41.893 [2024-12-09 05:15:18.522248] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:41.893 [2024-12-09 05:15:18.522256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:41.893 [2024-12-09 05:15:18.522263] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:41.893 [2024-12-09 05:15:18.522269] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:41.893 [2024-12-09 05:15:18.523640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.893 [2024-12-09 05:15:18.523725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:41.893 [2024-12-09 05:15:18.523727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:42.152 [2024-12-09 05:15:18.673931] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:42.152 Malloc0 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:42.152 [2024-12-09 05:15:18.739025] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:42.152 [2024-12-09 05:15:18.746939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:42.152 Malloc1 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.152 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:42.412 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.412 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:42.412 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.412 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:42.412 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.412 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3649014 00:20:42.412 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:42.412 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:42.412 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3649014 /var/tmp/bdevperf.sock 00:20:42.412 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3649014 ']' 00:20:42.412 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:42.412 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.412 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:42.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
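Before bdevperf comes up, the target inside the namespace has been provisioned over JSON-RPC: a TCP transport (the same -t tcp -o -u 8192 options as the trace), two 64 MiB malloc bdevs, and two subsystems (cnode1, cnode2) each exposing its bdev on listeners 4420 and 4421. rpc_cmd in the trace is a thin wrapper around scripts/rpc.py, so an equivalent manual sequence would look roughly like the sketch below (paths abbreviated; the default /var/tmp/spdk.sock target RPC socket is assumed):

    #!/usr/bin/env bash
    RPC=./scripts/rpc.py                               # SPDK RPC client for the target
    $RPC nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2; do
        $RPC bdev_malloc_create 64 512 -b Malloc$((i-1))          # 64 MiB, 512 B blocks
        $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$((i-1))
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4421
    done
    # bdevperf is then launched idle (-z) with its own RPC socket; controllers are
    # attached later and the 4 KiB, queue-depth-128 write run is kicked off via
    # bdevperf.py perform_tests, as the trace that follows shows.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &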
00:20:42.412 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.412 05:15:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:42.670 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.670 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:20:42.670 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:20:42.670 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.670 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:42.670 NVMe0n1 00:20:42.670 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.670 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:42.670 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:42.670 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.670 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:42.670 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.670 1 00:20:42.670 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:20:42.670 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:42.670 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:20:42.670 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:42.670 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:42.670 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:42.670 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:42.670 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:20:42.670 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.670 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:42.670 request: 00:20:42.670 { 00:20:42.670 "name": "NVMe0", 00:20:42.670 "trtype": "tcp", 00:20:42.670 "traddr": "10.0.0.2", 00:20:42.670 "adrfam": "ipv4", 00:20:42.670 "trsvcid": "4420", 00:20:42.670 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:20:42.670 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:42.670 "hostaddr": "10.0.0.1", 00:20:42.670 "prchk_reftag": false, 00:20:42.670 "prchk_guard": false, 00:20:42.670 "hdgst": false, 00:20:42.670 "ddgst": false, 00:20:42.670 "allow_unrecognized_csi": false, 00:20:42.670 "method": "bdev_nvme_attach_controller", 00:20:42.670 "req_id": 1 00:20:42.670 } 00:20:42.670 Got JSON-RPC error response 00:20:42.670 response: 00:20:42.670 { 00:20:42.670 "code": -114, 00:20:42.670 "message": "A controller named NVMe0 already exists with the specified network path" 00:20:42.670 } 00:20:42.670 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:42.670 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:42.671 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:42.671 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:42.671 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:42.671 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:42.671 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:42.671 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:42.671 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:42.671 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:42.671 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:42.671 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:42.671 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:42.671 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.671 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:42.671 request: 00:20:42.671 { 00:20:42.671 "name": "NVMe0", 00:20:42.671 "trtype": "tcp", 00:20:42.671 "traddr": "10.0.0.2", 00:20:42.671 "adrfam": "ipv4", 00:20:42.671 "trsvcid": "4420", 00:20:42.671 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:42.671 "hostaddr": "10.0.0.1", 00:20:42.671 "prchk_reftag": false, 00:20:42.671 "prchk_guard": false, 00:20:42.671 "hdgst": false, 00:20:42.671 "ddgst": false, 00:20:42.671 "allow_unrecognized_csi": false, 00:20:42.671 "method": "bdev_nvme_attach_controller", 00:20:42.671 "req_id": 1 00:20:42.671 } 00:20:42.671 Got JSON-RPC error response 00:20:42.671 response: 00:20:42.671 { 00:20:42.671 "code": -114, 00:20:42.671 "message": "A controller named NVMe0 already exists with the specified network path" 00:20:42.671 } 00:20:42.671 05:15:19 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:42.671 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:42.671 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:42.671 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:42.671 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:42.671 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:42.671 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:42.671 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:42.671 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:42.929 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:42.929 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:42.929 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:42.929 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:42.929 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.929 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:42.929 request: 00:20:42.929 { 00:20:42.929 "name": "NVMe0", 00:20:42.929 "trtype": "tcp", 00:20:42.929 "traddr": "10.0.0.2", 00:20:42.929 "adrfam": "ipv4", 00:20:42.929 "trsvcid": "4420", 00:20:42.929 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.929 "hostaddr": "10.0.0.1", 00:20:42.929 "prchk_reftag": false, 00:20:42.929 "prchk_guard": false, 00:20:42.929 "hdgst": false, 00:20:42.929 "ddgst": false, 00:20:42.929 "multipath": "disable", 00:20:42.929 "allow_unrecognized_csi": false, 00:20:42.929 "method": "bdev_nvme_attach_controller", 00:20:42.929 "req_id": 1 00:20:42.929 } 00:20:42.929 Got JSON-RPC error response 00:20:42.929 response: 00:20:42.929 { 00:20:42.929 "code": -114, 00:20:42.929 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:20:42.929 } 00:20:42.929 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:42.930 05:15:19 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:42.930 request: 00:20:42.930 { 00:20:42.930 "name": "NVMe0", 00:20:42.930 "trtype": "tcp", 00:20:42.930 "traddr": "10.0.0.2", 00:20:42.930 "adrfam": "ipv4", 00:20:42.930 "trsvcid": "4420", 00:20:42.930 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.930 "hostaddr": "10.0.0.1", 00:20:42.930 "prchk_reftag": false, 00:20:42.930 "prchk_guard": false, 00:20:42.930 "hdgst": false, 00:20:42.930 "ddgst": false, 00:20:42.930 "multipath": "failover", 00:20:42.930 "allow_unrecognized_csi": false, 00:20:42.930 "method": "bdev_nvme_attach_controller", 00:20:42.930 "req_id": 1 00:20:42.930 } 00:20:42.930 Got JSON-RPC error response 00:20:42.930 response: 00:20:42.930 { 00:20:42.930 "code": -114, 00:20:42.930 "message": "A controller named NVMe0 already exists with the specified network path" 00:20:42.930 } 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:42.930 NVMe0n1 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
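Every -114 response above is the same collision on the bdev name NVMe0: re-attaching with a different hostnqn, pointing the name at cnode2, or repeating the identical traddr/trsvcid (with -x disable or -x failover) is rejected, while attaching the same subsystem through the second listener port is accepted and simply adds another path under NVMe0n1. Condensed to the essential calls against the bdevperf RPC socket, with arguments copied from the trace:

    #!/usr/bin/env bash
    RPC='./scripts/rpc.py -s /var/tmp/bdevperf.sock'
    NQN=nqn.2016-06.io.spdk:cnode1
    # first path: creates controller NVMe0 and bdev NVMe0n1
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN -i 10.0.0.1
    # each of these fails with -114 ("A controller named NVMe0 already exists ..."):
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN -i 10.0.0.1 \
         -q nqn.2021-09-7.io.spdk:00001              || true   # different hostnqn
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1   || true   # different subsystem
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN -i 10.0.0.1 \
         -x failover                                 || true   # identical path repeated
    # second listener port on the same subsystem is accepted as an extra path:
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN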
00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.930 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:43.188 00:20:43.188 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.188 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:43.188 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:43.188 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.188 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:43.188 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.188 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:43.188 05:15:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:44.581 { 00:20:44.581 "results": [ 00:20:44.581 { 00:20:44.581 "job": "NVMe0n1", 00:20:44.581 "core_mask": "0x1", 00:20:44.581 "workload": "write", 00:20:44.581 "status": "finished", 00:20:44.581 "queue_depth": 128, 00:20:44.581 "io_size": 4096, 00:20:44.581 "runtime": 1.008187, 00:20:44.581 "iops": 24126.4765365949, 00:20:44.581 "mibps": 94.24404897107382, 00:20:44.581 "io_failed": 0, 00:20:44.581 "io_timeout": 0, 00:20:44.581 "avg_latency_us": 5298.349357299643, 00:20:44.581 "min_latency_us": 1517.3008695652175, 00:20:44.581 "max_latency_us": 10656.72347826087 00:20:44.581 } 00:20:44.581 ], 00:20:44.581 "core_count": 1 00:20:44.581 } 00:20:44.581 05:15:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:44.581 05:15:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.581 05:15:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.581 05:15:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.581 05:15:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:20:44.581 05:15:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3649014 00:20:44.581 05:15:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 3649014 ']' 00:20:44.581 05:15:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3649014 00:20:44.581 05:15:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:20:44.581 05:15:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:44.581 05:15:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3649014 00:20:44.581 05:15:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:44.581 05:15:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:44.581 05:15:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3649014' 00:20:44.581 killing process with pid 3649014 00:20:44.581 05:15:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3649014 00:20:44.581 05:15:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3649014 00:20:44.581 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:44.581 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.581 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.581 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.581 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:44.581 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.581 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.581 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.581 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:20:44.581 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:44.581 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:20:44.581 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:44.581 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:20:44.581 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:20:44.581 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:44.581 [2024-12-09 05:15:18.850027] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:20:44.581 [2024-12-09 05:15:18.850075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3649014 ] 00:20:44.581 [2024-12-09 05:15:18.914716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.581 [2024-12-09 05:15:18.956127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.581 [2024-12-09 05:15:19.662725] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name ccfd692d-8514-485a-8e50-9720ee3af686 already exists 00:20:44.581 [2024-12-09 05:15:19.662753] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:ccfd692d-8514-485a-8e50-9720ee3af686 alias for bdev NVMe1n1 00:20:44.581 [2024-12-09 05:15:19.662761] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:44.581 Running I/O for 1 seconds... 00:20:44.581 24069.00 IOPS, 94.02 MiB/s 00:20:44.581 Latency(us) 00:20:44.581 [2024-12-09T04:15:21.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.581 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:44.581 NVMe0n1 : 1.01 24126.48 94.24 0.00 0.00 5298.35 1517.30 10656.72 00:20:44.581 [2024-12-09T04:15:21.227Z] =================================================================================================================== 00:20:44.581 [2024-12-09T04:15:21.227Z] Total : 24126.48 94.24 0.00 0.00 5298.35 1517.30 10656.72 00:20:44.581 Received shutdown signal, test time was about 1.000000 seconds 00:20:44.581 00:20:44.581 Latency(us) 00:20:44.581 [2024-12-09T04:15:21.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.581 [2024-12-09T04:15:21.227Z] =================================================================================================================== 00:20:44.581 [2024-12-09T04:15:21.227Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:44.582 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:44.582 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:44.582 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:20:44.582 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:20:44.582 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:44.582 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:20:44.582 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:44.582 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:20:44.582 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:44.582 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:44.582 rmmod nvme_tcp 00:20:44.582 rmmod nvme_fabrics 00:20:44.582 rmmod nvme_keyring 00:20:44.582 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:44.582 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:20:44.582 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:20:44.582 
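The nvmftestfini teardown that starts here unloads the kernel initiator modules and reverts the network state prepared for this phy run. Condensed into standalone commands it is roughly the following (a sketch: the real helper loops per module and tolerates failures, and the namespace and interface names follow this run):

    modprobe -v -r nvme-tcp          # also drags out nvme_fabrics and nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics      # second pass in case the fabrics module is still loaded
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK_NVMF-tagged ACCEPT rules
    ip netns delete cvl_0_0_ns_spdk  # _remove_spdk_ns: remove the target-side namespace
    ip -4 addr flush cvl_0_1         # clear the initiator-side address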
05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3648887 ']' 00:20:44.582 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3648887 00:20:44.582 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3648887 ']' 00:20:44.582 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3648887 00:20:44.582 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:20:44.582 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:44.582 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3648887 00:20:44.840 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:44.840 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:44.840 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3648887' 00:20:44.840 killing process with pid 3648887 00:20:44.840 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3648887 00:20:44.840 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3648887 00:20:44.840 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:44.840 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:44.840 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:44.840 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:20:44.840 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:20:44.840 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:45.098 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:20:45.098 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:45.098 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:45.098 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.098 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:45.098 05:15:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.996 05:15:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:46.996 00:20:46.996 real 0m10.541s 00:20:46.996 user 0m12.485s 00:20:46.996 sys 0m4.494s 00:20:46.996 05:15:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:46.996 05:15:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:46.996 ************************************ 00:20:46.996 END TEST nvmf_multicontroller 00:20:46.996 ************************************ 00:20:46.996 05:15:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:20:46.996 05:15:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:46.997 05:15:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:46.997 05:15:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.997 ************************************ 00:20:46.997 START TEST nvmf_aer 00:20:46.997 ************************************ 00:20:46.997 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:47.254 * Looking for test storage... 00:20:47.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:47.254 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:47.254 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:20:47.254 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:47.254 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:47.254 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:47.254 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:47.254 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:47.254 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:20:47.254 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:47.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.255 --rc genhtml_branch_coverage=1 00:20:47.255 --rc genhtml_function_coverage=1 00:20:47.255 --rc genhtml_legend=1 00:20:47.255 --rc geninfo_all_blocks=1 00:20:47.255 --rc geninfo_unexecuted_blocks=1 00:20:47.255 00:20:47.255 ' 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:47.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.255 --rc genhtml_branch_coverage=1 00:20:47.255 --rc genhtml_function_coverage=1 00:20:47.255 --rc genhtml_legend=1 00:20:47.255 --rc geninfo_all_blocks=1 00:20:47.255 --rc geninfo_unexecuted_blocks=1 00:20:47.255 00:20:47.255 ' 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:47.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.255 --rc genhtml_branch_coverage=1 00:20:47.255 --rc genhtml_function_coverage=1 00:20:47.255 --rc genhtml_legend=1 00:20:47.255 --rc geninfo_all_blocks=1 00:20:47.255 --rc geninfo_unexecuted_blocks=1 00:20:47.255 00:20:47.255 ' 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:47.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.255 --rc genhtml_branch_coverage=1 00:20:47.255 --rc genhtml_function_coverage=1 00:20:47.255 --rc genhtml_legend=1 00:20:47.255 --rc geninfo_all_blocks=1 00:20:47.255 --rc geninfo_unexecuted_blocks=1 00:20:47.255 00:20:47.255 ' 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:47.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:20:47.255 05:15:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:52.530 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:52.530 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:52.530 Found net devices under 0000:86:00.0: cvl_0_0 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:52.530 05:15:28 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:52.530 Found net devices under 0000:86:00.1: cvl_0_1 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:52.530 05:15:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:52.530 
05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:52.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:52.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.414 ms 00:20:52.530 00:20:52.530 --- 10.0.0.2 ping statistics --- 00:20:52.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.530 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:52.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:52.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:20:52.530 00:20:52.530 --- 10.0.0.1 ping statistics --- 00:20:52.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.530 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3652779 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3652779 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3652779 ']' 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:52.530 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:52.789 [2024-12-09 05:15:29.212656] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
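Before the target comes up, the phy-mode setup above split the two e810 ports across network namespaces: cvl_0_0 was moved into cvl_0_0_ns_spdk with 10.0.0.2/24 (target side), cvl_0_1 kept 10.0.0.1/24 in the root namespace (initiator side), an SPDK_NVMF-tagged iptables rule opened TCP/4420, and reachability was ping-verified in both directions; the nvmf_tgt starting up here is then launched inside that namespace. A condensed sketch of the same setup and launch (interface names, addresses and nvmf_tgt flags mirror the logged commands):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                          # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # target -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # from an SPDK checkout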
00:20:52.789 [2024-12-09 05:15:29.212703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.789 [2024-12-09 05:15:29.280493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:52.789 [2024-12-09 05:15:29.322442] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:52.789 [2024-12-09 05:15:29.322480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.789 [2024-12-09 05:15:29.322487] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:52.789 [2024-12-09 05:15:29.322493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:52.789 [2024-12-09 05:15:29.322498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:52.789 [2024-12-09 05:15:29.324080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.789 [2024-12-09 05:15:29.324174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.789 [2024-12-09 05:15:29.324264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:52.789 [2024-12-09 05:15:29.324266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.789 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:52.789 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:20:52.789 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:52.789 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:52.789 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:53.047 [2024-12-09 05:15:29.470856] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:53.047 Malloc0 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:53.047 [2024-12-09 05:15:29.533060] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:53.047 [ 00:20:53.047 { 00:20:53.047 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:53.047 "subtype": "Discovery", 00:20:53.047 "listen_addresses": [], 00:20:53.047 "allow_any_host": true, 00:20:53.047 "hosts": [] 00:20:53.047 }, 00:20:53.047 { 00:20:53.047 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.047 "subtype": "NVMe", 00:20:53.047 "listen_addresses": [ 00:20:53.047 { 00:20:53.047 "trtype": "TCP", 00:20:53.047 "adrfam": "IPv4", 00:20:53.047 "traddr": "10.0.0.2", 00:20:53.047 "trsvcid": "4420" 00:20:53.047 } 00:20:53.047 ], 00:20:53.047 "allow_any_host": true, 00:20:53.047 "hosts": [], 00:20:53.047 "serial_number": "SPDK00000000000001", 00:20:53.047 "model_number": "SPDK bdev Controller", 00:20:53.047 "max_namespaces": 2, 00:20:53.047 "min_cntlid": 1, 00:20:53.047 "max_cntlid": 65519, 00:20:53.047 "namespaces": [ 00:20:53.047 { 00:20:53.047 "nsid": 1, 00:20:53.047 "bdev_name": "Malloc0", 00:20:53.047 "name": "Malloc0", 00:20:53.047 "nguid": "1D63B26A5D0049589A5005FDA4DEAB00", 00:20:53.047 "uuid": "1d63b26a-5d00-4958-9a50-05fda4deab00" 00:20:53.047 } 00:20:53.047 ] 00:20:53.047 } 00:20:53.047 ] 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3652943 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:20:53.047 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:20:53.305 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:53.305 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:20:53.305 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:20:53.305 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:20:53.305 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:53.305 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:53.305 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:20:53.305 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:53.305 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.305 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:53.305 Malloc1 00:20:53.305 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.305 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:53.305 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.305 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:53.305 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.305 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:53.305 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.305 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:53.305 Asynchronous Event Request test 00:20:53.305 Attaching to 10.0.0.2 00:20:53.305 Attached to 10.0.0.2 00:20:53.305 Registering asynchronous event callbacks... 00:20:53.305 Starting namespace attribute notice tests for all controllers... 00:20:53.305 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:53.305 aer_cb - Changed Namespace 00:20:53.305 Cleaning up... 
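The AER flow above: aer.sh creates Malloc0 as the only namespace of cnode1 (capped at two namespaces with -m 2), starts test/nvme/aer/aer with -n 2 against a touch file, then adds Malloc1 as nsid 2, which is what fires the namespace-attribute-changed AEN ("aer_cb - Changed Namespace"); the subsystem listing that follows shows both namespaces. Condensed into the underlying RPCs (a sketch: rpc_cmd wraps scripts/rpc.py against the target's default RPC socket, and all values follow the test):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # the aer tool is connected and watching; the second namespace triggers the AEN
    ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    ./scripts/rpc.py nvmf_get_subsystems          # now lists nsid 1 (Malloc0) and nsid 2 (Malloc1)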
00:20:53.305 [ 00:20:53.305 { 00:20:53.305 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:53.305 "subtype": "Discovery", 00:20:53.305 "listen_addresses": [], 00:20:53.305 "allow_any_host": true, 00:20:53.305 "hosts": [] 00:20:53.305 }, 00:20:53.305 { 00:20:53.305 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.306 "subtype": "NVMe", 00:20:53.306 "listen_addresses": [ 00:20:53.306 { 00:20:53.306 "trtype": "TCP", 00:20:53.306 "adrfam": "IPv4", 00:20:53.306 "traddr": "10.0.0.2", 00:20:53.306 "trsvcid": "4420" 00:20:53.306 } 00:20:53.306 ], 00:20:53.306 "allow_any_host": true, 00:20:53.306 "hosts": [], 00:20:53.306 "serial_number": "SPDK00000000000001", 00:20:53.306 "model_number": "SPDK bdev Controller", 00:20:53.306 "max_namespaces": 2, 00:20:53.306 "min_cntlid": 1, 00:20:53.306 "max_cntlid": 65519, 00:20:53.306 "namespaces": [ 00:20:53.306 { 00:20:53.306 "nsid": 1, 00:20:53.306 "bdev_name": "Malloc0", 00:20:53.306 "name": "Malloc0", 00:20:53.306 "nguid": "1D63B26A5D0049589A5005FDA4DEAB00", 00:20:53.306 "uuid": "1d63b26a-5d00-4958-9a50-05fda4deab00" 00:20:53.306 }, 00:20:53.306 { 00:20:53.306 "nsid": 2, 00:20:53.306 "bdev_name": "Malloc1", 00:20:53.306 "name": "Malloc1", 00:20:53.306 "nguid": "847C415809DB4B66BBCB32B92843FEFC", 00:20:53.306 "uuid": "847c4158-09db-4b66-bbcb-32b92843fefc" 00:20:53.306 } 00:20:53.306 ] 00:20:53.306 } 00:20:53.306 ] 00:20:53.306 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.306 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3652943 00:20:53.306 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:53.306 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.306 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:53.565 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.565 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:53.565 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.565 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:53.565 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.565 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:53.565 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.565 05:15:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:53.565 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.565 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:53.565 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:53.565 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:53.565 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:20:53.565 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:53.565 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:20:53.565 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:53.565 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:53.565 rmmod 
nvme_tcp 00:20:53.565 rmmod nvme_fabrics 00:20:53.565 rmmod nvme_keyring 00:20:53.565 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:53.565 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:20:53.565 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:20:53.565 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3652779 ']' 00:20:53.565 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3652779 00:20:53.565 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3652779 ']' 00:20:53.565 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3652779 00:20:53.565 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:20:53.565 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:53.565 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3652779 00:20:53.565 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:53.565 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:53.565 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3652779' 00:20:53.565 killing process with pid 3652779 00:20:53.565 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3652779 00:20:53.565 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3652779 00:20:53.823 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:53.823 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:53.823 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:53.823 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:20:53.823 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:20:53.823 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:20:53.823 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:53.823 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:53.823 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:53.823 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.823 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:53.823 05:15:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:56.354 00:20:56.354 real 0m8.764s 00:20:56.354 user 0m5.351s 00:20:56.354 sys 0m4.395s 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:56.354 ************************************ 00:20:56.354 END TEST nvmf_aer 00:20:56.354 ************************************ 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.354 ************************************ 00:20:56.354 START TEST nvmf_async_init 00:20:56.354 ************************************ 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:56.354 * Looking for test storage... 00:20:56.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:20:56.354 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:56.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.355 --rc genhtml_branch_coverage=1 00:20:56.355 --rc genhtml_function_coverage=1 00:20:56.355 --rc genhtml_legend=1 00:20:56.355 --rc geninfo_all_blocks=1 00:20:56.355 --rc geninfo_unexecuted_blocks=1 00:20:56.355 00:20:56.355 ' 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:56.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.355 --rc genhtml_branch_coverage=1 00:20:56.355 --rc genhtml_function_coverage=1 00:20:56.355 --rc genhtml_legend=1 00:20:56.355 --rc geninfo_all_blocks=1 00:20:56.355 --rc geninfo_unexecuted_blocks=1 00:20:56.355 00:20:56.355 ' 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:56.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.355 --rc genhtml_branch_coverage=1 00:20:56.355 --rc genhtml_function_coverage=1 00:20:56.355 --rc genhtml_legend=1 00:20:56.355 --rc geninfo_all_blocks=1 00:20:56.355 --rc geninfo_unexecuted_blocks=1 00:20:56.355 00:20:56.355 ' 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:56.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.355 --rc genhtml_branch_coverage=1 00:20:56.355 --rc genhtml_function_coverage=1 00:20:56.355 --rc genhtml_legend=1 00:20:56.355 --rc geninfo_all_blocks=1 00:20:56.355 --rc geninfo_unexecuted_blocks=1 00:20:56.355 00:20:56.355 ' 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:56.355 05:15:32 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:56.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:56.355 05:15:32 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=efcf3d937ff74b2b8e9da2b6d43d046f 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:20:56.355 05:15:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:01.623 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:01.623 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:01.623 Found net devices under 0000:86:00.0: cvl_0_0 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.623 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:01.624 Found net devices under 0000:86:00.1: cvl_0_1 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:01.624 05:15:37 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:01.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:01.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:21:01.624 00:21:01.624 --- 10.0.0.2 ping statistics --- 00:21:01.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.624 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:01.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:01.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:21:01.624 00:21:01.624 --- 10.0.0.1 ping statistics --- 00:21:01.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.624 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:01.624 05:15:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:01.624 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:01.624 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:01.624 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:01.624 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:01.624 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:01.624 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3656423 00:21:01.624 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3656423 00:21:01.624 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3656423 ']' 00:21:01.624 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.624 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:01.624 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.624 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:01.624 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:01.624 [2024-12-09 05:15:38.065795] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
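The lines above show the harness building a point-to-point NVMe/TCP path out of the two e810 ports before the target is started: cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables rule opens port 4420, and one ping in each direction confirms reachability. Reconstructed from the trace (same interface names and addresses as logged; a sketch, not a verbatim copy of nvmf/common.sh), the plumbing is roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # real rule carries an SPDK_NVMF comment
    ping -c 1 10.0.0.2                                   # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace

The nvmf target is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1), which is the SPDK application whose startup messages begin just below.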
00:21:01.624 [2024-12-09 05:15:38.065839] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.624 [2024-12-09 05:15:38.134959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.624 [2024-12-09 05:15:38.177502] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.624 [2024-12-09 05:15:38.177538] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.624 [2024-12-09 05:15:38.177545] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.624 [2024-12-09 05:15:38.177551] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.624 [2024-12-09 05:15:38.177557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:01.624 [2024-12-09 05:15:38.178129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:01.883 [2024-12-09 05:15:38.315560] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:01.883 null0 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g efcf3d937ff74b2b8e9da2b6d43d046f 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:01.883 [2024-12-09 05:15:38.367833] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.883 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.141 nvme0n1 00:21:02.141 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.141 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:02.141 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.141 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.141 [ 00:21:02.141 { 00:21:02.141 "name": "nvme0n1", 00:21:02.141 "aliases": [ 00:21:02.141 "efcf3d93-7ff7-4b2b-8e9d-a2b6d43d046f" 00:21:02.141 ], 00:21:02.141 "product_name": "NVMe disk", 00:21:02.141 "block_size": 512, 00:21:02.141 "num_blocks": 2097152, 00:21:02.141 "uuid": "efcf3d93-7ff7-4b2b-8e9d-a2b6d43d046f", 00:21:02.141 "numa_id": 1, 00:21:02.141 "assigned_rate_limits": { 00:21:02.141 "rw_ios_per_sec": 0, 00:21:02.141 "rw_mbytes_per_sec": 0, 00:21:02.141 "r_mbytes_per_sec": 0, 00:21:02.141 "w_mbytes_per_sec": 0 00:21:02.141 }, 00:21:02.141 "claimed": false, 00:21:02.141 "zoned": false, 00:21:02.141 "supported_io_types": { 00:21:02.141 "read": true, 00:21:02.141 "write": true, 00:21:02.141 "unmap": false, 00:21:02.141 "flush": true, 00:21:02.141 "reset": true, 00:21:02.141 "nvme_admin": true, 00:21:02.141 "nvme_io": true, 00:21:02.141 "nvme_io_md": false, 00:21:02.141 "write_zeroes": true, 00:21:02.141 "zcopy": false, 00:21:02.141 "get_zone_info": false, 00:21:02.141 "zone_management": false, 00:21:02.141 "zone_append": false, 00:21:02.141 "compare": true, 00:21:02.141 "compare_and_write": true, 00:21:02.141 "abort": true, 00:21:02.141 "seek_hole": false, 00:21:02.141 "seek_data": false, 00:21:02.141 "copy": true, 00:21:02.141 "nvme_iov_md": false 00:21:02.141 }, 00:21:02.141 
"memory_domains": [ 00:21:02.141 { 00:21:02.141 "dma_device_id": "system", 00:21:02.141 "dma_device_type": 1 00:21:02.141 } 00:21:02.141 ], 00:21:02.141 "driver_specific": { 00:21:02.141 "nvme": [ 00:21:02.141 { 00:21:02.141 "trid": { 00:21:02.141 "trtype": "TCP", 00:21:02.141 "adrfam": "IPv4", 00:21:02.141 "traddr": "10.0.0.2", 00:21:02.141 "trsvcid": "4420", 00:21:02.141 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:02.141 }, 00:21:02.141 "ctrlr_data": { 00:21:02.141 "cntlid": 1, 00:21:02.141 "vendor_id": "0x8086", 00:21:02.141 "model_number": "SPDK bdev Controller", 00:21:02.141 "serial_number": "00000000000000000000", 00:21:02.141 "firmware_revision": "25.01", 00:21:02.141 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:02.141 "oacs": { 00:21:02.141 "security": 0, 00:21:02.141 "format": 0, 00:21:02.141 "firmware": 0, 00:21:02.141 "ns_manage": 0 00:21:02.141 }, 00:21:02.141 "multi_ctrlr": true, 00:21:02.141 "ana_reporting": false 00:21:02.141 }, 00:21:02.141 "vs": { 00:21:02.141 "nvme_version": "1.3" 00:21:02.141 }, 00:21:02.141 "ns_data": { 00:21:02.141 "id": 1, 00:21:02.141 "can_share": true 00:21:02.141 } 00:21:02.141 } 00:21:02.141 ], 00:21:02.141 "mp_policy": "active_passive" 00:21:02.141 } 00:21:02.141 } 00:21:02.141 ] 00:21:02.141 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.141 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:02.141 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.141 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.141 [2024-12-09 05:15:38.632362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:02.141 [2024-12-09 05:15:38.632417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2058e20 (9): Bad file descriptor 00:21:02.141 [2024-12-09 05:15:38.764070] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:21:02.141 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.141 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:02.141 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.141 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.141 [ 00:21:02.141 { 00:21:02.141 "name": "nvme0n1", 00:21:02.141 "aliases": [ 00:21:02.141 "efcf3d93-7ff7-4b2b-8e9d-a2b6d43d046f" 00:21:02.141 ], 00:21:02.141 "product_name": "NVMe disk", 00:21:02.141 "block_size": 512, 00:21:02.141 "num_blocks": 2097152, 00:21:02.141 "uuid": "efcf3d93-7ff7-4b2b-8e9d-a2b6d43d046f", 00:21:02.141 "numa_id": 1, 00:21:02.141 "assigned_rate_limits": { 00:21:02.141 "rw_ios_per_sec": 0, 00:21:02.141 "rw_mbytes_per_sec": 0, 00:21:02.141 "r_mbytes_per_sec": 0, 00:21:02.141 "w_mbytes_per_sec": 0 00:21:02.141 }, 00:21:02.141 "claimed": false, 00:21:02.141 "zoned": false, 00:21:02.141 "supported_io_types": { 00:21:02.141 "read": true, 00:21:02.141 "write": true, 00:21:02.141 "unmap": false, 00:21:02.141 "flush": true, 00:21:02.141 "reset": true, 00:21:02.141 "nvme_admin": true, 00:21:02.141 "nvme_io": true, 00:21:02.141 "nvme_io_md": false, 00:21:02.141 "write_zeroes": true, 00:21:02.141 "zcopy": false, 00:21:02.141 "get_zone_info": false, 00:21:02.141 "zone_management": false, 00:21:02.141 "zone_append": false, 00:21:02.141 "compare": true, 00:21:02.141 "compare_and_write": true, 00:21:02.141 "abort": true, 00:21:02.141 "seek_hole": false, 00:21:02.141 "seek_data": false, 00:21:02.141 "copy": true, 00:21:02.141 "nvme_iov_md": false 00:21:02.141 }, 00:21:02.141 "memory_domains": [ 00:21:02.141 { 00:21:02.141 "dma_device_id": "system", 00:21:02.141 "dma_device_type": 1 00:21:02.141 } 00:21:02.141 ], 00:21:02.141 "driver_specific": { 00:21:02.141 "nvme": [ 00:21:02.141 { 00:21:02.141 "trid": { 00:21:02.141 "trtype": "TCP", 00:21:02.141 "adrfam": "IPv4", 00:21:02.141 "traddr": "10.0.0.2", 00:21:02.141 "trsvcid": "4420", 00:21:02.141 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:02.141 }, 00:21:02.141 "ctrlr_data": { 00:21:02.141 "cntlid": 2, 00:21:02.141 "vendor_id": "0x8086", 00:21:02.141 "model_number": "SPDK bdev Controller", 00:21:02.141 "serial_number": "00000000000000000000", 00:21:02.141 "firmware_revision": "25.01", 00:21:02.141 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:02.141 "oacs": { 00:21:02.141 "security": 0, 00:21:02.141 "format": 0, 00:21:02.141 "firmware": 0, 00:21:02.141 "ns_manage": 0 00:21:02.141 }, 00:21:02.141 "multi_ctrlr": true, 00:21:02.141 "ana_reporting": false 00:21:02.141 }, 00:21:02.141 "vs": { 00:21:02.142 "nvme_version": "1.3" 00:21:02.142 }, 00:21:02.142 "ns_data": { 00:21:02.142 "id": 1, 00:21:02.142 "can_share": true 00:21:02.142 } 00:21:02.142 } 00:21:02.142 ], 00:21:02.142 "mp_policy": "active_passive" 00:21:02.142 } 00:21:02.142 } 00:21:02.142 ] 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
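Two details of the dumps above are worth noting: the namespace GUID passed to nvmf_subsystem_add_ns is a hyphen-stripped UUID, and it resurfaces in bdev_get_bdevs as the bdev's uuid and alias with the hyphens restored; the controller ID also advances from cntlid 1 to cntlid 2 across the reset, consistent with the target allocating a fresh controller for the reconnect. The GUID preparation in the script is simply:

    nguid=$(uuidgen | tr -d -)        # e.g. efcf3d937ff74b2b8e9da2b6d43d046f
    # reported back as uuid/alias efcf3d93-7ff7-4b2b-8e9d-a2b6d43d046f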
00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.6erbdMyX0w 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.6erbdMyX0w 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.6erbdMyX0w 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.404 [2024-12-09 05:15:38.836979] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:02.404 [2024-12-09 05:15:38.837076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.404 [2024-12-09 05:15:38.857051] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:02.404 nvme0n1 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.404 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.404 [ 00:21:02.404 { 00:21:02.404 "name": "nvme0n1", 00:21:02.404 "aliases": [ 00:21:02.404 "efcf3d93-7ff7-4b2b-8e9d-a2b6d43d046f" 00:21:02.404 ], 00:21:02.404 "product_name": "NVMe disk", 00:21:02.404 "block_size": 512, 00:21:02.404 "num_blocks": 2097152, 00:21:02.404 "uuid": "efcf3d93-7ff7-4b2b-8e9d-a2b6d43d046f", 00:21:02.404 "numa_id": 1, 00:21:02.404 "assigned_rate_limits": { 00:21:02.404 "rw_ios_per_sec": 0, 00:21:02.404 "rw_mbytes_per_sec": 0, 00:21:02.404 "r_mbytes_per_sec": 0, 00:21:02.404 "w_mbytes_per_sec": 0 00:21:02.404 }, 00:21:02.404 "claimed": false, 00:21:02.404 "zoned": false, 00:21:02.404 "supported_io_types": { 00:21:02.404 "read": true, 00:21:02.404 "write": true, 00:21:02.404 "unmap": false, 00:21:02.404 "flush": true, 00:21:02.404 "reset": true, 00:21:02.404 "nvme_admin": true, 00:21:02.404 "nvme_io": true, 00:21:02.404 "nvme_io_md": false, 00:21:02.404 "write_zeroes": true, 00:21:02.404 "zcopy": false, 00:21:02.404 "get_zone_info": false, 00:21:02.404 "zone_management": false, 00:21:02.404 "zone_append": false, 00:21:02.404 "compare": true, 00:21:02.404 "compare_and_write": true, 00:21:02.404 "abort": true, 00:21:02.404 "seek_hole": false, 00:21:02.404 "seek_data": false, 00:21:02.404 "copy": true, 00:21:02.404 "nvme_iov_md": false 00:21:02.404 }, 00:21:02.404 "memory_domains": [ 00:21:02.404 { 00:21:02.404 "dma_device_id": "system", 00:21:02.404 "dma_device_type": 1 00:21:02.404 } 00:21:02.404 ], 00:21:02.404 "driver_specific": { 00:21:02.404 "nvme": [ 00:21:02.404 { 00:21:02.404 "trid": { 00:21:02.404 "trtype": "TCP", 00:21:02.404 "adrfam": "IPv4", 00:21:02.404 "traddr": "10.0.0.2", 00:21:02.404 "trsvcid": "4421", 00:21:02.404 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:02.404 }, 00:21:02.404 "ctrlr_data": { 00:21:02.404 "cntlid": 3, 00:21:02.404 "vendor_id": "0x8086", 00:21:02.404 "model_number": "SPDK bdev Controller", 00:21:02.404 "serial_number": "00000000000000000000", 00:21:02.404 "firmware_revision": "25.01", 00:21:02.404 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:02.404 "oacs": { 00:21:02.404 "security": 0, 00:21:02.404 "format": 0, 00:21:02.404 "firmware": 0, 00:21:02.404 "ns_manage": 0 00:21:02.404 }, 00:21:02.404 "multi_ctrlr": true, 00:21:02.404 "ana_reporting": false 00:21:02.404 }, 00:21:02.404 "vs": { 00:21:02.404 "nvme_version": "1.3" 00:21:02.404 }, 00:21:02.404 "ns_data": { 00:21:02.404 "id": 1, 00:21:02.404 "can_share": true 00:21:02.404 } 00:21:02.404 } 00:21:02.404 ], 00:21:02.405 "mp_policy": "active_passive" 00:21:02.405 } 00:21:02.405 } 00:21:02.405 ] 00:21:02.405 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.405 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.405 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.405 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.405 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.405 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.6erbdMyX0w 00:21:02.405 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
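The remainder of the test repeats the attach over a TLS-protected listener on port 4421. Pulled together from the rpc_cmd trace above (rpc.py again stands for the harness wrapper, and the redirect of the PSK into the key file is implied by the script rather than visible in the xtrace output), the sequence is roughly:

    key_path=$(mktemp)                                   # /tmp/tmp.6erbdMyX0w in this run
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"
    rpc.py keyring_file_add_key key0 "$key_path"
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
           -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

Both the listener and the attach log 'TLS support is considered experimental'; the resulting nvme0n1 dump shows the connection on trsvcid 4421 with cntlid 3, after which the controller is detached and the temporary key file removed in the cleanup that follows.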
00:21:02.405 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:02.405 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:02.405 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:02.405 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:02.405 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:02.405 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:02.405 05:15:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:02.405 rmmod nvme_tcp 00:21:02.405 rmmod nvme_fabrics 00:21:02.405 rmmod nvme_keyring 00:21:02.405 05:15:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:02.405 05:15:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:02.405 05:15:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:02.405 05:15:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3656423 ']' 00:21:02.405 05:15:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3656423 00:21:02.405 05:15:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3656423 ']' 00:21:02.405 05:15:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3656423 00:21:02.405 05:15:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:21:02.405 05:15:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:02.405 05:15:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3656423 00:21:02.662 05:15:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:02.662 05:15:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:02.662 05:15:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3656423' 00:21:02.662 killing process with pid 3656423 00:21:02.662 05:15:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3656423 00:21:02.662 05:15:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3656423 00:21:02.662 05:15:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:02.662 05:15:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:02.662 05:15:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:02.662 05:15:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:21:02.662 05:15:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:21:02.662 05:15:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:02.662 05:15:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:21:02.662 05:15:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:02.662 05:15:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:02.662 05:15:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:21:02.662 05:15:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:02.662 05:15:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.189 05:15:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:05.189 00:21:05.189 real 0m8.832s 00:21:05.189 user 0m2.961s 00:21:05.189 sys 0m4.288s 00:21:05.189 05:15:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.189 05:15:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:05.189 ************************************ 00:21:05.189 END TEST nvmf_async_init 00:21:05.189 ************************************ 00:21:05.189 05:15:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:05.189 05:15:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:05.189 05:15:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:05.189 05:15:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.189 ************************************ 00:21:05.189 START TEST dma 00:21:05.189 ************************************ 00:21:05.189 05:15:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:05.189 * Looking for test storage... 00:21:05.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:05.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.190 --rc genhtml_branch_coverage=1 00:21:05.190 --rc genhtml_function_coverage=1 00:21:05.190 --rc genhtml_legend=1 00:21:05.190 --rc geninfo_all_blocks=1 00:21:05.190 --rc geninfo_unexecuted_blocks=1 00:21:05.190 00:21:05.190 ' 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:05.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.190 --rc genhtml_branch_coverage=1 00:21:05.190 --rc genhtml_function_coverage=1 00:21:05.190 --rc genhtml_legend=1 00:21:05.190 --rc geninfo_all_blocks=1 00:21:05.190 --rc geninfo_unexecuted_blocks=1 00:21:05.190 00:21:05.190 ' 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:05.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.190 --rc genhtml_branch_coverage=1 00:21:05.190 --rc genhtml_function_coverage=1 00:21:05.190 --rc genhtml_legend=1 00:21:05.190 --rc geninfo_all_blocks=1 00:21:05.190 --rc geninfo_unexecuted_blocks=1 00:21:05.190 00:21:05.190 ' 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:05.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.190 --rc genhtml_branch_coverage=1 00:21:05.190 --rc genhtml_function_coverage=1 00:21:05.190 --rc genhtml_legend=1 00:21:05.190 --rc geninfo_all_blocks=1 00:21:05.190 --rc geninfo_unexecuted_blocks=1 00:21:05.190 00:21:05.190 ' 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.190 
05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:05.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:05.190 00:21:05.190 real 0m0.198s 00:21:05.190 user 0m0.125s 00:21:05.190 sys 0m0.084s 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:05.190 ************************************ 00:21:05.190 END TEST dma 00:21:05.190 ************************************ 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.190 ************************************ 00:21:05.190 START TEST nvmf_identify 00:21:05.190 
************************************ 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:05.190 * Looking for test storage... 00:21:05.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:05.190 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.191 --rc genhtml_branch_coverage=1 00:21:05.191 --rc genhtml_function_coverage=1 00:21:05.191 --rc genhtml_legend=1 00:21:05.191 --rc geninfo_all_blocks=1 00:21:05.191 --rc geninfo_unexecuted_blocks=1 00:21:05.191 00:21:05.191 ' 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.191 --rc genhtml_branch_coverage=1 00:21:05.191 --rc genhtml_function_coverage=1 00:21:05.191 --rc genhtml_legend=1 00:21:05.191 --rc geninfo_all_blocks=1 00:21:05.191 --rc geninfo_unexecuted_blocks=1 00:21:05.191 00:21:05.191 ' 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.191 --rc genhtml_branch_coverage=1 00:21:05.191 --rc genhtml_function_coverage=1 00:21:05.191 --rc genhtml_legend=1 00:21:05.191 --rc geninfo_all_blocks=1 00:21:05.191 --rc geninfo_unexecuted_blocks=1 00:21:05.191 00:21:05.191 ' 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.191 --rc genhtml_branch_coverage=1 00:21:05.191 --rc genhtml_function_coverage=1 00:21:05.191 --rc genhtml_legend=1 00:21:05.191 --rc geninfo_all_blocks=1 00:21:05.191 --rc geninfo_unexecuted_blocks=1 00:21:05.191 00:21:05.191 ' 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:05.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:21:05.191 05:15:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:10.457 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:10.457 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:10.457 Found net devices under 0000:86:00.0: cvl_0_0 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.457 05:15:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.457 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.457 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.457 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.457 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.457 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.457 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.457 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:10.457 Found net devices under 0000:86:00.1: cvl_0_1 00:21:10.457 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.457 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:10.457 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:21:10.457 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:10.457 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:10.457 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:10.457 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:10.457 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:10.457 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.457 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:10.457 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:10.458 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:10.458 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:10.458 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:10.458 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:10.458 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:10.458 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:21:10.458 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:10.458 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:10.458 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:10.458 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:10.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:10.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:21:10.715 00:21:10.715 --- 10.0.0.2 ping statistics --- 00:21:10.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.715 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:10.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:10.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:21:10.715 00:21:10.715 --- 10.0.0.1 ping statistics --- 00:21:10.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.715 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3660153 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3660153 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3660153 ']' 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.715 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:10.715 [2024-12-09 05:15:47.308050] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:21:10.715 [2024-12-09 05:15:47.308093] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.972 [2024-12-09 05:15:47.376624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:10.972 [2024-12-09 05:15:47.420791] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.972 [2024-12-09 05:15:47.420826] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.972 [2024-12-09 05:15:47.420833] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.972 [2024-12-09 05:15:47.420840] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.972 [2024-12-09 05:15:47.420845] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:10.972 [2024-12-09 05:15:47.422456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.972 [2024-12-09 05:15:47.422549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:10.972 [2024-12-09 05:15:47.422639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:10.972 [2024-12-09 05:15:47.422640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.972 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:10.972 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:21:10.972 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:10.972 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.972 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:10.972 [2024-12-09 05:15:47.524547] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.972 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.972 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:10.972 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:10.972 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:10.972 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:10.972 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.972 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:10.972 Malloc0 00:21:10.972 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.972 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:10.972 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.972 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:11.230 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.230 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:11.230 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.230 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:11.230 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.230 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.230 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.230 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:11.230 [2024-12-09 05:15:47.628715] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.230 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.230 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:11.230 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.230 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:11.230 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.230 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:11.230 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.230 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:11.230 [ 00:21:11.230 { 00:21:11.230 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:11.230 "subtype": "Discovery", 00:21:11.230 "listen_addresses": [ 00:21:11.230 { 00:21:11.230 "trtype": "TCP", 00:21:11.230 "adrfam": "IPv4", 00:21:11.230 "traddr": "10.0.0.2", 00:21:11.230 "trsvcid": "4420" 00:21:11.230 } 00:21:11.230 ], 00:21:11.230 "allow_any_host": true, 00:21:11.230 "hosts": [] 00:21:11.230 }, 00:21:11.230 { 00:21:11.231 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:11.231 "subtype": "NVMe", 00:21:11.231 "listen_addresses": [ 00:21:11.231 { 00:21:11.231 "trtype": "TCP", 00:21:11.231 "adrfam": "IPv4", 00:21:11.231 "traddr": "10.0.0.2", 00:21:11.231 "trsvcid": "4420" 00:21:11.231 } 00:21:11.231 ], 00:21:11.231 "allow_any_host": true, 00:21:11.231 "hosts": [], 00:21:11.231 "serial_number": "SPDK00000000000001", 00:21:11.231 "model_number": "SPDK bdev Controller", 00:21:11.231 "max_namespaces": 32, 00:21:11.231 "min_cntlid": 1, 00:21:11.231 "max_cntlid": 65519, 00:21:11.231 "namespaces": [ 00:21:11.231 { 00:21:11.231 "nsid": 1, 00:21:11.231 "bdev_name": "Malloc0", 00:21:11.231 "name": "Malloc0", 00:21:11.231 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:11.231 "eui64": "ABCDEF0123456789", 00:21:11.231 "uuid": "1e8e1058-1266-496e-8a11-4825c298130f" 00:21:11.231 } 00:21:11.231 ] 00:21:11.231 } 00:21:11.231 ] 00:21:11.231 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.231 05:15:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:11.231 [2024-12-09 05:15:47.680627] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:21:11.231 [2024-12-09 05:15:47.680664] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3660180 ] 00:21:11.231 [2024-12-09 05:15:47.725883] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:21:11.231 [2024-12-09 05:15:47.725933] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:11.231 [2024-12-09 05:15:47.725939] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:11.231 [2024-12-09 05:15:47.725956] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:11.231 [2024-12-09 05:15:47.725965] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:11.231 [2024-12-09 05:15:47.730324] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:21:11.231 [2024-12-09 05:15:47.730357] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2230690 0 00:21:11.231 [2024-12-09 05:15:47.738012] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:11.231 [2024-12-09 05:15:47.738028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:11.231 [2024-12-09 05:15:47.738035] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:11.231 [2024-12-09 05:15:47.738039] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:11.231 [2024-12-09 05:15:47.738074] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.231 [2024-12-09 05:15:47.738080] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.231 [2024-12-09 05:15:47.738084] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2230690) 00:21:11.231 [2024-12-09 05:15:47.738097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:11.231 [2024-12-09 05:15:47.738114] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292100, cid 0, qid 0 00:21:11.231 [2024-12-09 05:15:47.746009] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.231 [2024-12-09 05:15:47.746020] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.231 [2024-12-09 05:15:47.746024] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.231 [2024-12-09 05:15:47.746028] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2292100) on tqpair=0x2230690 00:21:11.231 [2024-12-09 05:15:47.746040] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:11.231 [2024-12-09 05:15:47.746047] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:21:11.231 [2024-12-09 05:15:47.746053] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:21:11.231 [2024-12-09 05:15:47.746068] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.231 [2024-12-09 05:15:47.746072] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.231 [2024-12-09 05:15:47.746075] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2230690) 00:21:11.231 [2024-12-09 05:15:47.746083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.231 [2024-12-09 05:15:47.746096] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292100, cid 0, qid 0 00:21:11.231 [2024-12-09 05:15:47.746268] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.231 [2024-12-09 05:15:47.746274] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.231 [2024-12-09 05:15:47.746278] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.231 [2024-12-09 05:15:47.746281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2292100) on tqpair=0x2230690 00:21:11.231 [2024-12-09 05:15:47.746289] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:21:11.231 [2024-12-09 05:15:47.746296] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:21:11.231 [2024-12-09 05:15:47.746302] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.231 [2024-12-09 05:15:47.746306] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.231 [2024-12-09 05:15:47.746309] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2230690) 00:21:11.231 [2024-12-09 05:15:47.746315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.231 [2024-12-09 05:15:47.746325] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292100, cid 0, qid 0 00:21:11.231 [2024-12-09 05:15:47.746398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.231 [2024-12-09 05:15:47.746404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.231 [2024-12-09 05:15:47.746407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.231 [2024-12-09 05:15:47.746410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2292100) on tqpair=0x2230690 00:21:11.231 [2024-12-09 05:15:47.746416] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:21:11.231 [2024-12-09 05:15:47.746426] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:11.231 [2024-12-09 05:15:47.746432] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.231 [2024-12-09 05:15:47.746436] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.231 [2024-12-09 05:15:47.746439] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2230690) 00:21:11.231 [2024-12-09 05:15:47.746445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.231 [2024-12-09 05:15:47.746455] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292100, cid 0, qid 0 
00:21:11.231 [2024-12-09 05:15:47.746520] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.231 [2024-12-09 05:15:47.746526] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.231 [2024-12-09 05:15:47.746529] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.231 [2024-12-09 05:15:47.746533] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2292100) on tqpair=0x2230690 00:21:11.231 [2024-12-09 05:15:47.746537] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:11.231 [2024-12-09 05:15:47.746546] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.231 [2024-12-09 05:15:47.746550] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.231 [2024-12-09 05:15:47.746553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2230690) 00:21:11.231 [2024-12-09 05:15:47.746559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.231 [2024-12-09 05:15:47.746568] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292100, cid 0, qid 0 00:21:11.231 [2024-12-09 05:15:47.746636] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.231 [2024-12-09 05:15:47.746643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.231 [2024-12-09 05:15:47.746647] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.231 [2024-12-09 05:15:47.746652] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2292100) on tqpair=0x2230690 00:21:11.231 [2024-12-09 05:15:47.746657] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:11.231 [2024-12-09 05:15:47.746662] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:11.231 [2024-12-09 05:15:47.746668] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:11.231 [2024-12-09 05:15:47.746774] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:21:11.231 [2024-12-09 05:15:47.746778] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:11.231 [2024-12-09 05:15:47.746787] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.231 [2024-12-09 05:15:47.746790] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.231 [2024-12-09 05:15:47.746793] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2230690) 00:21:11.231 [2024-12-09 05:15:47.746799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.231 [2024-12-09 05:15:47.746809] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292100, cid 0, qid 0 00:21:11.231 [2024-12-09 05:15:47.746880] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.231 [2024-12-09 05:15:47.746886] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.231 [2024-12-09 05:15:47.746889] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.231 [2024-12-09 05:15:47.746894] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2292100) on tqpair=0x2230690 00:21:11.231 [2024-12-09 05:15:47.746899] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:11.231 [2024-12-09 05:15:47.746907] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.231 [2024-12-09 05:15:47.746910] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.231 [2024-12-09 05:15:47.746914] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2230690) 00:21:11.231 [2024-12-09 05:15:47.746919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.232 [2024-12-09 05:15:47.746929] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292100, cid 0, qid 0 00:21:11.232 [2024-12-09 05:15:47.746993] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.232 [2024-12-09 05:15:47.747006] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.232 [2024-12-09 05:15:47.747009] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.747013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2292100) on tqpair=0x2230690 00:21:11.232 [2024-12-09 05:15:47.747017] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:11.232 [2024-12-09 05:15:47.747022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:11.232 [2024-12-09 05:15:47.747029] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:21:11.232 [2024-12-09 05:15:47.747039] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:11.232 [2024-12-09 05:15:47.747047] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.747051] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2230690) 00:21:11.232 [2024-12-09 05:15:47.747057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.232 [2024-12-09 05:15:47.747067] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292100, cid 0, qid 0 00:21:11.232 [2024-12-09 05:15:47.747176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:11.232 [2024-12-09 05:15:47.747182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:11.232 [2024-12-09 05:15:47.747185] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.747189] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2230690): datao=0, datal=4096, cccid=0 00:21:11.232 [2024-12-09 05:15:47.747193] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x2292100) on tqpair(0x2230690): expected_datao=0, payload_size=4096 00:21:11.232 [2024-12-09 05:15:47.747198] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.747205] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.747209] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.789130] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.232 [2024-12-09 05:15:47.789143] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.232 [2024-12-09 05:15:47.789146] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.789150] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2292100) on tqpair=0x2230690 00:21:11.232 [2024-12-09 05:15:47.789158] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:21:11.232 [2024-12-09 05:15:47.789163] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:21:11.232 [2024-12-09 05:15:47.789171] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:21:11.232 [2024-12-09 05:15:47.789177] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:21:11.232 [2024-12-09 05:15:47.789181] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:21:11.232 [2024-12-09 05:15:47.789186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:21:11.232 [2024-12-09 05:15:47.789195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:11.232 [2024-12-09 05:15:47.789202] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.789206] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.789209] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2230690) 00:21:11.232 [2024-12-09 05:15:47.789216] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:11.232 [2024-12-09 05:15:47.789229] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292100, cid 0, qid 0 00:21:11.232 [2024-12-09 05:15:47.789299] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.232 [2024-12-09 05:15:47.789306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.232 [2024-12-09 05:15:47.789309] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.789313] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2292100) on tqpair=0x2230690 00:21:11.232 [2024-12-09 05:15:47.789320] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.789324] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.789327] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2230690) 00:21:11.232 
[2024-12-09 05:15:47.789332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.232 [2024-12-09 05:15:47.789338] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.789341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.789344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2230690) 00:21:11.232 [2024-12-09 05:15:47.789349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.232 [2024-12-09 05:15:47.789354] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.789358] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.789361] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2230690) 00:21:11.232 [2024-12-09 05:15:47.789366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.232 [2024-12-09 05:15:47.789371] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.789374] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.789378] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2230690) 00:21:11.232 [2024-12-09 05:15:47.789383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.232 [2024-12-09 05:15:47.789387] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:11.232 [2024-12-09 05:15:47.789399] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:11.232 [2024-12-09 05:15:47.789407] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.789411] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2230690) 00:21:11.232 [2024-12-09 05:15:47.789417] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.232 [2024-12-09 05:15:47.789429] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292100, cid 0, qid 0 00:21:11.232 [2024-12-09 05:15:47.789434] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292280, cid 1, qid 0 00:21:11.232 [2024-12-09 05:15:47.789438] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292400, cid 2, qid 0 00:21:11.232 [2024-12-09 05:15:47.789442] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292580, cid 3, qid 0 00:21:11.232 [2024-12-09 05:15:47.789446] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292700, cid 4, qid 0 00:21:11.232 [2024-12-09 05:15:47.789550] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.232 [2024-12-09 05:15:47.789556] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.232 [2024-12-09 05:15:47.789559] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:21:11.232 [2024-12-09 05:15:47.789563] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2292700) on tqpair=0x2230690 00:21:11.232 [2024-12-09 05:15:47.789568] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:21:11.232 [2024-12-09 05:15:47.789572] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:21:11.232 [2024-12-09 05:15:47.789583] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.789586] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2230690) 00:21:11.232 [2024-12-09 05:15:47.789592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.232 [2024-12-09 05:15:47.789602] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292700, cid 4, qid 0 00:21:11.232 [2024-12-09 05:15:47.789676] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:11.232 [2024-12-09 05:15:47.789682] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:11.232 [2024-12-09 05:15:47.789685] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.789688] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2230690): datao=0, datal=4096, cccid=4 00:21:11.232 [2024-12-09 05:15:47.789692] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2292700) on tqpair(0x2230690): expected_datao=0, payload_size=4096 00:21:11.232 [2024-12-09 05:15:47.789696] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.789730] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.789734] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.789779] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.232 [2024-12-09 05:15:47.789785] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.232 [2024-12-09 05:15:47.789788] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.789791] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2292700) on tqpair=0x2230690 00:21:11.232 [2024-12-09 05:15:47.789804] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:21:11.232 [2024-12-09 05:15:47.789825] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.789830] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2230690) 00:21:11.232 [2024-12-09 05:15:47.789835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.232 [2024-12-09 05:15:47.789843] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.789847] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.232 [2024-12-09 05:15:47.789850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2230690) 00:21:11.232 [2024-12-09 05:15:47.789855] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.232 [2024-12-09 05:15:47.789869] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292700, cid 4, qid 0 00:21:11.233 [2024-12-09 05:15:47.789874] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292880, cid 5, qid 0 00:21:11.233 [2024-12-09 05:15:47.789983] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:11.233 [2024-12-09 05:15:47.789989] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:11.233 [2024-12-09 05:15:47.789992] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:11.233 [2024-12-09 05:15:47.789995] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2230690): datao=0, datal=1024, cccid=4 00:21:11.233 [2024-12-09 05:15:47.794005] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2292700) on tqpair(0x2230690): expected_datao=0, payload_size=1024 00:21:11.233 [2024-12-09 05:15:47.794011] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.233 [2024-12-09 05:15:47.794016] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:11.233 [2024-12-09 05:15:47.794020] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:11.233 [2024-12-09 05:15:47.794025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.233 [2024-12-09 05:15:47.794030] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.233 [2024-12-09 05:15:47.794033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.233 [2024-12-09 05:15:47.794037] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2292880) on tqpair=0x2230690 00:21:11.233 [2024-12-09 05:15:47.834006] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.233 [2024-12-09 05:15:47.834015] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.233 [2024-12-09 05:15:47.834019] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.233 [2024-12-09 05:15:47.834022] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2292700) on tqpair=0x2230690 00:21:11.233 [2024-12-09 05:15:47.834034] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.233 [2024-12-09 05:15:47.834037] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2230690) 00:21:11.233 [2024-12-09 05:15:47.834044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.233 [2024-12-09 05:15:47.834061] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292700, cid 4, qid 0 00:21:11.233 [2024-12-09 05:15:47.834196] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:11.233 [2024-12-09 05:15:47.834201] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:11.233 [2024-12-09 05:15:47.834204] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:11.233 [2024-12-09 05:15:47.834207] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2230690): datao=0, datal=3072, cccid=4 00:21:11.233 [2024-12-09 05:15:47.834212] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2292700) on tqpair(0x2230690): expected_datao=0, payload_size=3072 00:21:11.233 [2024-12-09 05:15:47.834216] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.233 [2024-12-09 05:15:47.834228] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:11.233 [2024-12-09 05:15:47.834232] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:11.494 [2024-12-09 05:15:47.875136] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.495 [2024-12-09 05:15:47.875153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.495 [2024-12-09 05:15:47.875156] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.495 [2024-12-09 05:15:47.875164] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2292700) on tqpair=0x2230690 00:21:11.495 [2024-12-09 05:15:47.875174] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.495 [2024-12-09 05:15:47.875178] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2230690) 00:21:11.495 [2024-12-09 05:15:47.875185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.495 [2024-12-09 05:15:47.875201] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292700, cid 4, qid 0 00:21:11.495 [2024-12-09 05:15:47.875273] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:11.495 [2024-12-09 05:15:47.875279] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:11.495 [2024-12-09 05:15:47.875282] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:11.495 [2024-12-09 05:15:47.875285] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2230690): datao=0, datal=8, cccid=4 00:21:11.495 [2024-12-09 05:15:47.875289] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2292700) on tqpair(0x2230690): expected_datao=0, payload_size=8 00:21:11.495 [2024-12-09 05:15:47.875293] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.495 [2024-12-09 05:15:47.875299] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:11.495 [2024-12-09 05:15:47.875302] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:11.495 [2024-12-09 05:15:47.917169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.495 [2024-12-09 05:15:47.917182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.495 [2024-12-09 05:15:47.917186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.495 [2024-12-09 05:15:47.917189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2292700) on tqpair=0x2230690 00:21:11.495 ===================================================== 00:21:11.495 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:11.495 ===================================================== 00:21:11.495 Controller Capabilities/Features 00:21:11.495 ================================ 00:21:11.495 Vendor ID: 0000 00:21:11.495 Subsystem Vendor ID: 0000 00:21:11.495 Serial Number: .................... 00:21:11.495 Model Number: ........................................ 
00:21:11.495 Firmware Version: 25.01 00:21:11.495 Recommended Arb Burst: 0 00:21:11.495 IEEE OUI Identifier: 00 00 00 00:21:11.495 Multi-path I/O 00:21:11.495 May have multiple subsystem ports: No 00:21:11.495 May have multiple controllers: No 00:21:11.495 Associated with SR-IOV VF: No 00:21:11.495 Max Data Transfer Size: 131072 00:21:11.495 Max Number of Namespaces: 0 00:21:11.495 Max Number of I/O Queues: 1024 00:21:11.495 NVMe Specification Version (VS): 1.3 00:21:11.495 NVMe Specification Version (Identify): 1.3 00:21:11.495 Maximum Queue Entries: 128 00:21:11.495 Contiguous Queues Required: Yes 00:21:11.495 Arbitration Mechanisms Supported 00:21:11.495 Weighted Round Robin: Not Supported 00:21:11.495 Vendor Specific: Not Supported 00:21:11.495 Reset Timeout: 15000 ms 00:21:11.495 Doorbell Stride: 4 bytes 00:21:11.495 NVM Subsystem Reset: Not Supported 00:21:11.495 Command Sets Supported 00:21:11.495 NVM Command Set: Supported 00:21:11.495 Boot Partition: Not Supported 00:21:11.495 Memory Page Size Minimum: 4096 bytes 00:21:11.495 Memory Page Size Maximum: 4096 bytes 00:21:11.495 Persistent Memory Region: Not Supported 00:21:11.495 Optional Asynchronous Events Supported 00:21:11.495 Namespace Attribute Notices: Not Supported 00:21:11.495 Firmware Activation Notices: Not Supported 00:21:11.495 ANA Change Notices: Not Supported 00:21:11.495 PLE Aggregate Log Change Notices: Not Supported 00:21:11.495 LBA Status Info Alert Notices: Not Supported 00:21:11.495 EGE Aggregate Log Change Notices: Not Supported 00:21:11.495 Normal NVM Subsystem Shutdown event: Not Supported 00:21:11.495 Zone Descriptor Change Notices: Not Supported 00:21:11.495 Discovery Log Change Notices: Supported 00:21:11.495 Controller Attributes 00:21:11.495 128-bit Host Identifier: Not Supported 00:21:11.495 Non-Operational Permissive Mode: Not Supported 00:21:11.495 NVM Sets: Not Supported 00:21:11.495 Read Recovery Levels: Not Supported 00:21:11.495 Endurance Groups: Not Supported 00:21:11.495 Predictable Latency Mode: Not Supported 00:21:11.495 Traffic Based Keep ALive: Not Supported 00:21:11.495 Namespace Granularity: Not Supported 00:21:11.495 SQ Associations: Not Supported 00:21:11.495 UUID List: Not Supported 00:21:11.495 Multi-Domain Subsystem: Not Supported 00:21:11.495 Fixed Capacity Management: Not Supported 00:21:11.495 Variable Capacity Management: Not Supported 00:21:11.495 Delete Endurance Group: Not Supported 00:21:11.495 Delete NVM Set: Not Supported 00:21:11.495 Extended LBA Formats Supported: Not Supported 00:21:11.495 Flexible Data Placement Supported: Not Supported 00:21:11.495 00:21:11.495 Controller Memory Buffer Support 00:21:11.495 ================================ 00:21:11.495 Supported: No 00:21:11.495 00:21:11.495 Persistent Memory Region Support 00:21:11.495 ================================ 00:21:11.495 Supported: No 00:21:11.495 00:21:11.495 Admin Command Set Attributes 00:21:11.495 ============================ 00:21:11.495 Security Send/Receive: Not Supported 00:21:11.495 Format NVM: Not Supported 00:21:11.495 Firmware Activate/Download: Not Supported 00:21:11.495 Namespace Management: Not Supported 00:21:11.495 Device Self-Test: Not Supported 00:21:11.495 Directives: Not Supported 00:21:11.495 NVMe-MI: Not Supported 00:21:11.495 Virtualization Management: Not Supported 00:21:11.495 Doorbell Buffer Config: Not Supported 00:21:11.495 Get LBA Status Capability: Not Supported 00:21:11.495 Command & Feature Lockdown Capability: Not Supported 00:21:11.495 Abort Command Limit: 1 00:21:11.495 Async 
Event Request Limit: 4 00:21:11.495 Number of Firmware Slots: N/A 00:21:11.495 Firmware Slot 1 Read-Only: N/A 00:21:11.495 Firmware Activation Without Reset: N/A 00:21:11.495 Multiple Update Detection Support: N/A 00:21:11.495 Firmware Update Granularity: No Information Provided 00:21:11.495 Per-Namespace SMART Log: No 00:21:11.495 Asymmetric Namespace Access Log Page: Not Supported 00:21:11.495 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:11.495 Command Effects Log Page: Not Supported 00:21:11.495 Get Log Page Extended Data: Supported 00:21:11.495 Telemetry Log Pages: Not Supported 00:21:11.495 Persistent Event Log Pages: Not Supported 00:21:11.495 Supported Log Pages Log Page: May Support 00:21:11.495 Commands Supported & Effects Log Page: Not Supported 00:21:11.495 Feature Identifiers & Effects Log Page:May Support 00:21:11.495 NVMe-MI Commands & Effects Log Page: May Support 00:21:11.495 Data Area 4 for Telemetry Log: Not Supported 00:21:11.495 Error Log Page Entries Supported: 128 00:21:11.495 Keep Alive: Not Supported 00:21:11.495 00:21:11.495 NVM Command Set Attributes 00:21:11.495 ========================== 00:21:11.495 Submission Queue Entry Size 00:21:11.495 Max: 1 00:21:11.495 Min: 1 00:21:11.495 Completion Queue Entry Size 00:21:11.495 Max: 1 00:21:11.495 Min: 1 00:21:11.495 Number of Namespaces: 0 00:21:11.495 Compare Command: Not Supported 00:21:11.495 Write Uncorrectable Command: Not Supported 00:21:11.495 Dataset Management Command: Not Supported 00:21:11.495 Write Zeroes Command: Not Supported 00:21:11.495 Set Features Save Field: Not Supported 00:21:11.495 Reservations: Not Supported 00:21:11.495 Timestamp: Not Supported 00:21:11.495 Copy: Not Supported 00:21:11.495 Volatile Write Cache: Not Present 00:21:11.495 Atomic Write Unit (Normal): 1 00:21:11.495 Atomic Write Unit (PFail): 1 00:21:11.495 Atomic Compare & Write Unit: 1 00:21:11.495 Fused Compare & Write: Supported 00:21:11.495 Scatter-Gather List 00:21:11.495 SGL Command Set: Supported 00:21:11.495 SGL Keyed: Supported 00:21:11.495 SGL Bit Bucket Descriptor: Not Supported 00:21:11.495 SGL Metadata Pointer: Not Supported 00:21:11.495 Oversized SGL: Not Supported 00:21:11.495 SGL Metadata Address: Not Supported 00:21:11.495 SGL Offset: Supported 00:21:11.495 Transport SGL Data Block: Not Supported 00:21:11.495 Replay Protected Memory Block: Not Supported 00:21:11.495 00:21:11.495 Firmware Slot Information 00:21:11.495 ========================= 00:21:11.495 Active slot: 0 00:21:11.495 00:21:11.495 00:21:11.495 Error Log 00:21:11.495 ========= 00:21:11.495 00:21:11.495 Active Namespaces 00:21:11.495 ================= 00:21:11.495 Discovery Log Page 00:21:11.495 ================== 00:21:11.495 Generation Counter: 2 00:21:11.495 Number of Records: 2 00:21:11.495 Record Format: 0 00:21:11.495 00:21:11.495 Discovery Log Entry 0 00:21:11.495 ---------------------- 00:21:11.495 Transport Type: 3 (TCP) 00:21:11.495 Address Family: 1 (IPv4) 00:21:11.496 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:11.496 Entry Flags: 00:21:11.496 Duplicate Returned Information: 1 00:21:11.496 Explicit Persistent Connection Support for Discovery: 1 00:21:11.496 Transport Requirements: 00:21:11.496 Secure Channel: Not Required 00:21:11.496 Port ID: 0 (0x0000) 00:21:11.496 Controller ID: 65535 (0xffff) 00:21:11.496 Admin Max SQ Size: 128 00:21:11.496 Transport Service Identifier: 4420 00:21:11.496 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:11.496 Transport Address: 10.0.0.2 00:21:11.496 
Discovery Log Entry 1 00:21:11.496 ---------------------- 00:21:11.496 Transport Type: 3 (TCP) 00:21:11.496 Address Family: 1 (IPv4) 00:21:11.496 Subsystem Type: 2 (NVM Subsystem) 00:21:11.496 Entry Flags: 00:21:11.496 Duplicate Returned Information: 0 00:21:11.496 Explicit Persistent Connection Support for Discovery: 0 00:21:11.496 Transport Requirements: 00:21:11.496 Secure Channel: Not Required 00:21:11.496 Port ID: 0 (0x0000) 00:21:11.496 Controller ID: 65535 (0xffff) 00:21:11.496 Admin Max SQ Size: 128 00:21:11.496 Transport Service Identifier: 4420 00:21:11.496 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:11.496 Transport Address: 10.0.0.2 [2024-12-09 05:15:47.917274] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:21:11.496 [2024-12-09 05:15:47.917287] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2292100) on tqpair=0x2230690 00:21:11.496 [2024-12-09 05:15:47.917293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.496 [2024-12-09 05:15:47.917299] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2292280) on tqpair=0x2230690 00:21:11.496 [2024-12-09 05:15:47.917303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.496 [2024-12-09 05:15:47.917307] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2292400) on tqpair=0x2230690 00:21:11.496 [2024-12-09 05:15:47.917311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.496 [2024-12-09 05:15:47.917316] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2292580) on tqpair=0x2230690 00:21:11.496 [2024-12-09 05:15:47.917320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.496 [2024-12-09 05:15:47.917328] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.496 [2024-12-09 05:15:47.917332] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.496 [2024-12-09 05:15:47.917335] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2230690) 00:21:11.496 [2024-12-09 05:15:47.917342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.496 [2024-12-09 05:15:47.917357] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292580, cid 3, qid 0 00:21:11.496 [2024-12-09 05:15:47.917423] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.496 [2024-12-09 05:15:47.917429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.496 [2024-12-09 05:15:47.917432] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.496 [2024-12-09 05:15:47.917438] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2292580) on tqpair=0x2230690 00:21:11.496 [2024-12-09 05:15:47.917444] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.496 [2024-12-09 05:15:47.917448] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.496 [2024-12-09 05:15:47.917451] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2230690) 00:21:11.496 [2024-12-09 
05:15:47.917457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.496 [2024-12-09 05:15:47.917469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292580, cid 3, qid 0 00:21:11.496 [2024-12-09 05:15:47.917547] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.496 [2024-12-09 05:15:47.917553] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.496 [2024-12-09 05:15:47.917556] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.496 [2024-12-09 05:15:47.917560] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2292580) on tqpair=0x2230690 00:21:11.496 [2024-12-09 05:15:47.917565] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:21:11.496 [2024-12-09 05:15:47.917569] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:21:11.496 [2024-12-09 05:15:47.917577] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.496 [2024-12-09 05:15:47.917581] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.496 [2024-12-09 05:15:47.917584] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2230690) 00:21:11.496 [2024-12-09 05:15:47.917590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.496 [2024-12-09 05:15:47.917599] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292580, cid 3, qid 0 00:21:11.496 [2024-12-09 05:15:47.917662] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.496 [2024-12-09 05:15:47.917668] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.496 [2024-12-09 05:15:47.917671] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.496 [2024-12-09 05:15:47.917674] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2292580) on tqpair=0x2230690 00:21:11.496 [2024-12-09 05:15:47.917683] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.496 [2024-12-09 05:15:47.917687] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.496 [2024-12-09 05:15:47.917690] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2230690) 00:21:11.496 [2024-12-09 05:15:47.917696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.496 [2024-12-09 05:15:47.917705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292580, cid 3, qid 0 00:21:11.496 [2024-12-09 05:15:47.917770] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.496 [2024-12-09 05:15:47.917775] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.496 [2024-12-09 05:15:47.917778] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.496 [2024-12-09 05:15:47.917782] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2292580) on tqpair=0x2230690 00:21:11.496 [2024-12-09 05:15:47.917790] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.496 [2024-12-09 05:15:47.917794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.496 [2024-12-09 05:15:47.917797] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2230690) 00:21:11.496 [2024-12-09 05:15:47.917803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.496 [2024-12-09 05:15:47.917812] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292580, cid 3, qid 0 00:21:11.496 [2024-12-09 05:15:47.917877] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.496 [2024-12-09 05:15:47.917884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.496 [2024-12-09 05:15:47.917888] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.496 [2024-12-09 05:15:47.917891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2292580) on tqpair=0x2230690 00:21:11.496 [2024-12-09 05:15:47.917899] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.496 [2024-12-09 05:15:47.917903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.496 [2024-12-09 05:15:47.917906] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2230690) 00:21:11.496 [2024-12-09 05:15:47.917911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.496 [2024-12-09 05:15:47.917921] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292580, cid 3, qid 0 00:21:11.496 [2024-12-09 05:15:47.917991] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.496 [2024-12-09 05:15:47.922003] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.496 [2024-12-09 05:15:47.922008] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.496 [2024-12-09 05:15:47.922012] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2292580) on tqpair=0x2230690 00:21:11.496 [2024-12-09 05:15:47.922022] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.496 [2024-12-09 05:15:47.922026] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.496 [2024-12-09 05:15:47.922029] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2230690) 00:21:11.496 [2024-12-09 05:15:47.922035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.496 [2024-12-09 05:15:47.922046] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292580, cid 3, qid 0 00:21:11.496 [2024-12-09 05:15:47.922135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.496 [2024-12-09 05:15:47.922141] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.496 [2024-12-09 05:15:47.922144] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.496 [2024-12-09 05:15:47.922147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2292580) on tqpair=0x2230690 00:21:11.496 [2024-12-09 05:15:47.922154] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:21:11.496 00:21:11.496 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 
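For reference, the identify pass the test script launches here (spdk_nvme_identify with a transport-ID string) can also be driven directly from SPDK's public NVMe API. Below is a minimal sketch only, assuming a standard SPDK build with spdk/nvme.h and spdk/env.h available and the same 10.0.0.2:4420 / nqn.2016-06.io.spdk:cnode1 target shown in the command above; the program name and link flags are illustrative, not taken from this log.

    /* Sketch: connect to the NVMe-oF TCP subsystem used by this test and print
     * a couple of identify-controller fields.  Assumes a standard SPDK install;
     * linking (e.g. -lspdk_nvme -lspdk_env_dpdk) depends on the build. */
    #include <stdio.h>
    #include <string.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts opts;
        struct spdk_nvme_transport_id trid;
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&opts);
        opts.name = "identify_sketch";           /* illustrative name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        /* Same transport ID fields the script passes via spdk_nvme_identify -r. */
        memset(&trid, 0, sizeof(trid));
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Runs the admin-queue init sequence traced in this log. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            fprintf(stderr, "connect failed\n");
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("CNTLID: 0x%04x\n", cdata->cntlid);
        printf("max xfer size: %u bytes\n", spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

        spdk_nvme_detach(ctrlr);
        return 0;
    }

The connect call walks the same controller initialization state machine that the DEBUG lines around it trace: connect adminq, read VS and CAP, set CC.EN = 1, wait for CSTS.RDY = 1, identify controller, configure AER, set the keep-alive timeout, then identify namespaces before reaching the ready state. Exact env-setup details may vary between SPDK versions.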
00:21:11.496 [2024-12-09 05:15:48.032947] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:21:11.496 [2024-12-09 05:15:48.032981] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3660321 ] 00:21:11.496 [2024-12-09 05:15:48.078582] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:21:11.497 [2024-12-09 05:15:48.078623] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:11.497 [2024-12-09 05:15:48.078628] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:11.497 [2024-12-09 05:15:48.078642] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:11.497 [2024-12-09 05:15:48.078650] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:11.497 [2024-12-09 05:15:48.082202] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:21:11.497 [2024-12-09 05:15:48.082232] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2282690 0 00:21:11.497 [2024-12-09 05:15:48.089007] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:11.497 [2024-12-09 05:15:48.089022] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:11.497 [2024-12-09 05:15:48.089026] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:11.497 [2024-12-09 05:15:48.089029] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:11.497 [2024-12-09 05:15:48.089058] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.497 [2024-12-09 05:15:48.089063] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.497 [2024-12-09 05:15:48.089066] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2282690) 00:21:11.497 [2024-12-09 05:15:48.089076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:11.497 [2024-12-09 05:15:48.089095] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4100, cid 0, qid 0 00:21:11.497 [2024-12-09 05:15:48.096007] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.497 [2024-12-09 05:15:48.096016] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.497 [2024-12-09 05:15:48.096019] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.497 [2024-12-09 05:15:48.096023] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4100) on tqpair=0x2282690 00:21:11.497 [2024-12-09 05:15:48.096033] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:11.497 [2024-12-09 05:15:48.096040] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:21:11.497 [2024-12-09 05:15:48.096045] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:21:11.497 [2024-12-09 05:15:48.096056] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.497 [2024-12-09 05:15:48.096060] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.497 [2024-12-09 05:15:48.096064] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2282690) 00:21:11.497 [2024-12-09 05:15:48.096071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.497 [2024-12-09 05:15:48.096085] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4100, cid 0, qid 0 00:21:11.497 [2024-12-09 05:15:48.096250] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.497 [2024-12-09 05:15:48.096256] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.497 [2024-12-09 05:15:48.096259] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.497 [2024-12-09 05:15:48.096263] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4100) on tqpair=0x2282690 00:21:11.497 [2024-12-09 05:15:48.096269] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:21:11.497 [2024-12-09 05:15:48.096276] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:21:11.497 [2024-12-09 05:15:48.096283] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.497 [2024-12-09 05:15:48.096286] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.497 [2024-12-09 05:15:48.096289] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2282690) 00:21:11.497 [2024-12-09 05:15:48.096295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.497 [2024-12-09 05:15:48.096306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4100, cid 0, qid 0 00:21:11.497 [2024-12-09 05:15:48.096377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.497 [2024-12-09 05:15:48.096383] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.497 [2024-12-09 05:15:48.096389] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.497 [2024-12-09 05:15:48.096393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4100) on tqpair=0x2282690 00:21:11.497 [2024-12-09 05:15:48.096397] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:21:11.497 [2024-12-09 05:15:48.096404] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:11.497 [2024-12-09 05:15:48.096410] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.497 [2024-12-09 05:15:48.096413] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.497 [2024-12-09 05:15:48.096417] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2282690) 00:21:11.497 [2024-12-09 05:15:48.096422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.497 [2024-12-09 05:15:48.096432] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4100, cid 0, qid 0 00:21:11.497 [2024-12-09 05:15:48.096500] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.497 [2024-12-09 
05:15:48.096505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.497 [2024-12-09 05:15:48.096509] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.497 [2024-12-09 05:15:48.096512] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4100) on tqpair=0x2282690 00:21:11.497 [2024-12-09 05:15:48.096516] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:11.497 [2024-12-09 05:15:48.096524] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.497 [2024-12-09 05:15:48.096528] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.497 [2024-12-09 05:15:48.096531] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2282690) 00:21:11.497 [2024-12-09 05:15:48.096537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.497 [2024-12-09 05:15:48.096546] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4100, cid 0, qid 0 00:21:11.497 [2024-12-09 05:15:48.096615] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.497 [2024-12-09 05:15:48.096621] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.497 [2024-12-09 05:15:48.096624] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.497 [2024-12-09 05:15:48.096627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4100) on tqpair=0x2282690 00:21:11.497 [2024-12-09 05:15:48.096631] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:11.497 [2024-12-09 05:15:48.096636] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:11.497 [2024-12-09 05:15:48.096643] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:11.497 [2024-12-09 05:15:48.096748] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:21:11.497 [2024-12-09 05:15:48.096753] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:11.497 [2024-12-09 05:15:48.096759] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.497 [2024-12-09 05:15:48.096762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.497 [2024-12-09 05:15:48.096765] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2282690) 00:21:11.497 [2024-12-09 05:15:48.096771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.497 [2024-12-09 05:15:48.096781] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4100, cid 0, qid 0 00:21:11.497 [2024-12-09 05:15:48.096848] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.497 [2024-12-09 05:15:48.096854] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.497 [2024-12-09 05:15:48.096857] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.497 [2024-12-09 
05:15:48.096860] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4100) on tqpair=0x2282690 00:21:11.497 [2024-12-09 05:15:48.096864] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:11.497 [2024-12-09 05:15:48.096873] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.497 [2024-12-09 05:15:48.096876] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.497 [2024-12-09 05:15:48.096879] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2282690) 00:21:11.497 [2024-12-09 05:15:48.096885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.497 [2024-12-09 05:15:48.096895] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4100, cid 0, qid 0 00:21:11.497 [2024-12-09 05:15:48.096973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.497 [2024-12-09 05:15:48.096979] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.497 [2024-12-09 05:15:48.096982] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.497 [2024-12-09 05:15:48.096986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4100) on tqpair=0x2282690 00:21:11.497 [2024-12-09 05:15:48.096990] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:11.497 [2024-12-09 05:15:48.096994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:11.497 [2024-12-09 05:15:48.097005] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:21:11.497 [2024-12-09 05:15:48.097014] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:11.497 [2024-12-09 05:15:48.097022] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.497 [2024-12-09 05:15:48.097025] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2282690) 00:21:11.497 [2024-12-09 05:15:48.097031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.497 [2024-12-09 05:15:48.097041] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4100, cid 0, qid 0 00:21:11.497 [2024-12-09 05:15:48.097143] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:11.497 [2024-12-09 05:15:48.097149] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:11.497 [2024-12-09 05:15:48.097152] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:11.497 [2024-12-09 05:15:48.097155] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2282690): datao=0, datal=4096, cccid=0 00:21:11.497 [2024-12-09 05:15:48.097159] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22e4100) on tqpair(0x2282690): expected_datao=0, payload_size=4096 00:21:11.498 [2024-12-09 05:15:48.097163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.097169] 
nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.097173] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.097188] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.498 [2024-12-09 05:15:48.097194] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.498 [2024-12-09 05:15:48.097197] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.097200] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4100) on tqpair=0x2282690 00:21:11.498 [2024-12-09 05:15:48.097208] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:21:11.498 [2024-12-09 05:15:48.097213] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:21:11.498 [2024-12-09 05:15:48.097217] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:21:11.498 [2024-12-09 05:15:48.097220] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:21:11.498 [2024-12-09 05:15:48.097224] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:21:11.498 [2024-12-09 05:15:48.097229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:21:11.498 [2024-12-09 05:15:48.097237] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:11.498 [2024-12-09 05:15:48.097242] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.097246] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.097249] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2282690) 00:21:11.498 [2024-12-09 05:15:48.097255] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:11.498 [2024-12-09 05:15:48.097265] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4100, cid 0, qid 0 00:21:11.498 [2024-12-09 05:15:48.097338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.498 [2024-12-09 05:15:48.097343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.498 [2024-12-09 05:15:48.097347] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.097350] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4100) on tqpair=0x2282690 00:21:11.498 [2024-12-09 05:15:48.097356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.097359] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.097362] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2282690) 00:21:11.498 [2024-12-09 05:15:48.097367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.498 [2024-12-09 05:15:48.097372] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.097376] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.097379] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2282690) 00:21:11.498 [2024-12-09 05:15:48.097384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.498 [2024-12-09 05:15:48.097389] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.097392] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.097395] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2282690) 00:21:11.498 [2024-12-09 05:15:48.097400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.498 [2024-12-09 05:15:48.097405] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.097409] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.097412] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282690) 00:21:11.498 [2024-12-09 05:15:48.097417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.498 [2024-12-09 05:15:48.097421] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:11.498 [2024-12-09 05:15:48.097432] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:11.498 [2024-12-09 05:15:48.097438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.097441] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2282690) 00:21:11.498 [2024-12-09 05:15:48.097447] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.498 [2024-12-09 05:15:48.097459] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4100, cid 0, qid 0 00:21:11.498 [2024-12-09 05:15:48.097463] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4280, cid 1, qid 0 00:21:11.498 [2024-12-09 05:15:48.097468] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4400, cid 2, qid 0 00:21:11.498 [2024-12-09 05:15:48.097472] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4580, cid 3, qid 0 00:21:11.498 [2024-12-09 05:15:48.097476] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4700, cid 4, qid 0 00:21:11.498 [2024-12-09 05:15:48.097576] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.498 [2024-12-09 05:15:48.097582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.498 [2024-12-09 05:15:48.097585] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.097588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4700) on tqpair=0x2282690 00:21:11.498 [2024-12-09 05:15:48.097593] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 
00:21:11.498 [2024-12-09 05:15:48.097597] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:11.498 [2024-12-09 05:15:48.097606] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:21:11.498 [2024-12-09 05:15:48.097612] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:11.498 [2024-12-09 05:15:48.097617] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.097621] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.097624] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2282690) 00:21:11.498 [2024-12-09 05:15:48.097630] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:11.498 [2024-12-09 05:15:48.097640] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4700, cid 4, qid 0 00:21:11.498 [2024-12-09 05:15:48.097716] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.498 [2024-12-09 05:15:48.097722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.498 [2024-12-09 05:15:48.097725] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.097728] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4700) on tqpair=0x2282690 00:21:11.498 [2024-12-09 05:15:48.097780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:21:11.498 [2024-12-09 05:15:48.097790] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:11.498 [2024-12-09 05:15:48.097797] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.097800] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2282690) 00:21:11.498 [2024-12-09 05:15:48.097806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.498 [2024-12-09 05:15:48.097816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4700, cid 4, qid 0 00:21:11.498 [2024-12-09 05:15:48.097906] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:11.498 [2024-12-09 05:15:48.097912] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:11.498 [2024-12-09 05:15:48.097915] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.097918] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2282690): datao=0, datal=4096, cccid=4 00:21:11.498 [2024-12-09 05:15:48.097922] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22e4700) on tqpair(0x2282690): expected_datao=0, payload_size=4096 00:21:11.498 [2024-12-09 05:15:48.097926] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.097932] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.097935] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.097976] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.498 [2024-12-09 05:15:48.097982] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.498 [2024-12-09 05:15:48.097985] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.097988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4700) on tqpair=0x2282690 00:21:11.498 [2024-12-09 05:15:48.098004] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:21:11.498 [2024-12-09 05:15:48.098017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:21:11.498 [2024-12-09 05:15:48.098027] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:21:11.498 [2024-12-09 05:15:48.098034] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.098037] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2282690) 00:21:11.498 [2024-12-09 05:15:48.098043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.498 [2024-12-09 05:15:48.098054] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4700, cid 4, qid 0 00:21:11.498 [2024-12-09 05:15:48.098156] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:11.498 [2024-12-09 05:15:48.098162] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:11.498 [2024-12-09 05:15:48.098165] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.098168] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2282690): datao=0, datal=4096, cccid=4 00:21:11.498 [2024-12-09 05:15:48.098172] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22e4700) on tqpair(0x2282690): expected_datao=0, payload_size=4096 00:21:11.498 [2024-12-09 05:15:48.098176] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.098182] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.098185] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:11.498 [2024-12-09 05:15:48.098201] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.499 [2024-12-09 05:15:48.098206] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.499 [2024-12-09 05:15:48.098209] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.499 [2024-12-09 05:15:48.098213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4700) on tqpair=0x2282690 00:21:11.499 [2024-12-09 05:15:48.098222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:11.499 [2024-12-09 05:15:48.098231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:11.499 [2024-12-09 05:15:48.098237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.499 
[2024-12-09 05:15:48.098243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2282690) 00:21:11.499 [2024-12-09 05:15:48.098249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.499 [2024-12-09 05:15:48.098259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4700, cid 4, qid 0 00:21:11.499 [2024-12-09 05:15:48.098339] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:11.499 [2024-12-09 05:15:48.098345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:11.499 [2024-12-09 05:15:48.098348] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:11.499 [2024-12-09 05:15:48.098351] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2282690): datao=0, datal=4096, cccid=4 00:21:11.499 [2024-12-09 05:15:48.098355] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22e4700) on tqpair(0x2282690): expected_datao=0, payload_size=4096 00:21:11.499 [2024-12-09 05:15:48.098359] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.499 [2024-12-09 05:15:48.098364] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:11.499 [2024-12-09 05:15:48.098368] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:11.499 [2024-12-09 05:15:48.098397] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.499 [2024-12-09 05:15:48.098402] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.499 [2024-12-09 05:15:48.098405] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.499 [2024-12-09 05:15:48.098408] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4700) on tqpair=0x2282690 00:21:11.499 [2024-12-09 05:15:48.098417] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:11.499 [2024-12-09 05:15:48.098425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:21:11.499 [2024-12-09 05:15:48.098432] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:21:11.499 [2024-12-09 05:15:48.098438] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:11.499 [2024-12-09 05:15:48.098443] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:11.499 [2024-12-09 05:15:48.098448] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:21:11.499 [2024-12-09 05:15:48.098452] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:21:11.499 [2024-12-09 05:15:48.098456] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:21:11.499 [2024-12-09 05:15:48.098461] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:21:11.499 [2024-12-09 05:15:48.098473] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.499 [2024-12-09 05:15:48.098477] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2282690) 00:21:11.499 [2024-12-09 05:15:48.098482] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.499 [2024-12-09 05:15:48.098488] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.499 [2024-12-09 05:15:48.098491] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.499 [2024-12-09 05:15:48.098494] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2282690) 00:21:11.499 [2024-12-09 05:15:48.098499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.499 [2024-12-09 05:15:48.098512] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4700, cid 4, qid 0 00:21:11.499 [2024-12-09 05:15:48.098519] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4880, cid 5, qid 0 00:21:11.499 [2024-12-09 05:15:48.098597] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.499 [2024-12-09 05:15:48.098603] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.499 [2024-12-09 05:15:48.098606] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.499 [2024-12-09 05:15:48.098610] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4700) on tqpair=0x2282690 00:21:11.499 [2024-12-09 05:15:48.098615] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.499 [2024-12-09 05:15:48.098620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.499 [2024-12-09 05:15:48.098623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.499 [2024-12-09 05:15:48.098627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4880) on tqpair=0x2282690 00:21:11.499 [2024-12-09 05:15:48.098635] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.499 [2024-12-09 05:15:48.098638] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2282690) 00:21:11.499 [2024-12-09 05:15:48.098644] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.499 [2024-12-09 05:15:48.098653] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4880, cid 5, qid 0 00:21:11.499 [2024-12-09 05:15:48.098724] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.499 [2024-12-09 05:15:48.098730] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.499 [2024-12-09 05:15:48.098733] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.499 [2024-12-09 05:15:48.098736] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4880) on tqpair=0x2282690 00:21:11.499 [2024-12-09 05:15:48.098744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.499 [2024-12-09 05:15:48.098748] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2282690) 00:21:11.499 [2024-12-09 05:15:48.098753] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:11.499 [2024-12-09 05:15:48.098762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4880, cid 5, qid 0 00:21:11.499 [2024-12-09 05:15:48.098832] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.499 [2024-12-09 05:15:48.098838] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.499 [2024-12-09 05:15:48.098841] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.499 [2024-12-09 05:15:48.098844] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4880) on tqpair=0x2282690 00:21:11.499 [2024-12-09 05:15:48.098852] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.499 [2024-12-09 05:15:48.098855] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2282690) 00:21:11.499 [2024-12-09 05:15:48.098861] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.499 [2024-12-09 05:15:48.098871] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4880, cid 5, qid 0 00:21:11.499 [2024-12-09 05:15:48.102006] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.499 [2024-12-09 05:15:48.102015] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.499 [2024-12-09 05:15:48.102018] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.499 [2024-12-09 05:15:48.102022] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4880) on tqpair=0x2282690 00:21:11.499 [2024-12-09 05:15:48.102037] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.499 [2024-12-09 05:15:48.102041] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2282690) 00:21:11.499 [2024-12-09 05:15:48.102047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.499 [2024-12-09 05:15:48.102057] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.499 [2024-12-09 05:15:48.102060] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2282690) 00:21:11.499 [2024-12-09 05:15:48.102066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.499 [2024-12-09 05:15:48.102072] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.499 [2024-12-09 05:15:48.102075] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2282690) 00:21:11.499 [2024-12-09 05:15:48.102080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.499 [2024-12-09 05:15:48.102087] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.499 [2024-12-09 05:15:48.102090] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2282690) 00:21:11.499 [2024-12-09 05:15:48.102095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.500 [2024-12-09 05:15:48.102108] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4880, cid 5, qid 0 00:21:11.500 [2024-12-09 05:15:48.102112] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4700, cid 4, qid 0 00:21:11.500 [2024-12-09 05:15:48.102117] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4a00, cid 6, qid 0 00:21:11.500 [2024-12-09 05:15:48.102121] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4b80, cid 7, qid 0 00:21:11.500 [2024-12-09 05:15:48.102355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:11.500 [2024-12-09 05:15:48.102362] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:11.500 [2024-12-09 05:15:48.102365] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:11.500 [2024-12-09 05:15:48.102368] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2282690): datao=0, datal=8192, cccid=5 00:21:11.500 [2024-12-09 05:15:48.102372] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22e4880) on tqpair(0x2282690): expected_datao=0, payload_size=8192 00:21:11.500 [2024-12-09 05:15:48.102376] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.500 [2024-12-09 05:15:48.102407] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:11.500 [2024-12-09 05:15:48.102411] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:11.500 [2024-12-09 05:15:48.102416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:11.500 [2024-12-09 05:15:48.102421] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:11.500 [2024-12-09 05:15:48.102424] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:11.500 [2024-12-09 05:15:48.102427] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2282690): datao=0, datal=512, cccid=4 00:21:11.500 [2024-12-09 05:15:48.102431] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22e4700) on tqpair(0x2282690): expected_datao=0, payload_size=512 00:21:11.500 [2024-12-09 05:15:48.102434] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.500 [2024-12-09 05:15:48.102440] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:11.500 [2024-12-09 05:15:48.102443] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:11.500 [2024-12-09 05:15:48.102448] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:11.500 [2024-12-09 05:15:48.102453] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:11.500 [2024-12-09 05:15:48.102455] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:11.500 [2024-12-09 05:15:48.102459] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2282690): datao=0, datal=512, cccid=6 00:21:11.500 [2024-12-09 05:15:48.102463] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22e4a00) on tqpair(0x2282690): expected_datao=0, payload_size=512 00:21:11.500 [2024-12-09 05:15:48.102469] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.500 [2024-12-09 05:15:48.102474] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:11.500 [2024-12-09 05:15:48.102477] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:11.500 [2024-12-09 05:15:48.102482] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:11.500 [2024-12-09 05:15:48.102487] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:11.500 [2024-12-09 05:15:48.102490] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:11.500 [2024-12-09 05:15:48.102493] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2282690): datao=0, datal=4096, cccid=7 00:21:11.500 [2024-12-09 05:15:48.102497] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22e4b80) on tqpair(0x2282690): expected_datao=0, payload_size=4096 00:21:11.500 [2024-12-09 05:15:48.102500] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.500 [2024-12-09 05:15:48.102506] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:11.500 [2024-12-09 05:15:48.102509] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:11.500 [2024-12-09 05:15:48.102516] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.500 [2024-12-09 05:15:48.102521] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.500 [2024-12-09 05:15:48.102524] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.500 [2024-12-09 05:15:48.102528] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4880) on tqpair=0x2282690 00:21:11.500 [2024-12-09 05:15:48.102538] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.500 [2024-12-09 05:15:48.102543] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.500 [2024-12-09 05:15:48.102546] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.500 [2024-12-09 05:15:48.102549] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4700) on tqpair=0x2282690 00:21:11.500 [2024-12-09 05:15:48.102557] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.500 [2024-12-09 05:15:48.102563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.500 [2024-12-09 05:15:48.102566] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.500 [2024-12-09 05:15:48.102569] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4a00) on tqpair=0x2282690 00:21:11.500 [2024-12-09 05:15:48.102575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.500 [2024-12-09 05:15:48.102580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.500 [2024-12-09 05:15:48.102583] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.500 [2024-12-09 05:15:48.102586] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4b80) on tqpair=0x2282690 00:21:11.500 ===================================================== 00:21:11.500 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:11.500 ===================================================== 00:21:11.500 Controller Capabilities/Features 00:21:11.500 ================================ 00:21:11.500 Vendor ID: 8086 00:21:11.500 Subsystem Vendor ID: 8086 00:21:11.500 Serial Number: SPDK00000000000001 00:21:11.500 Model Number: SPDK bdev Controller 00:21:11.500 Firmware Version: 25.01 00:21:11.500 Recommended Arb Burst: 6 00:21:11.500 IEEE OUI Identifier: e4 d2 5c 00:21:11.500 Multi-path I/O 00:21:11.500 May have multiple subsystem ports: Yes 00:21:11.500 May have multiple controllers: Yes 00:21:11.500 Associated with SR-IOV VF: No 00:21:11.500 Max Data Transfer Size: 131072 00:21:11.500 Max Number of Namespaces: 32 00:21:11.500 Max Number of I/O Queues: 127 
00:21:11.500 NVMe Specification Version (VS): 1.3 00:21:11.500 NVMe Specification Version (Identify): 1.3 00:21:11.500 Maximum Queue Entries: 128 00:21:11.500 Contiguous Queues Required: Yes 00:21:11.500 Arbitration Mechanisms Supported 00:21:11.500 Weighted Round Robin: Not Supported 00:21:11.500 Vendor Specific: Not Supported 00:21:11.500 Reset Timeout: 15000 ms 00:21:11.500 Doorbell Stride: 4 bytes 00:21:11.500 NVM Subsystem Reset: Not Supported 00:21:11.500 Command Sets Supported 00:21:11.500 NVM Command Set: Supported 00:21:11.500 Boot Partition: Not Supported 00:21:11.500 Memory Page Size Minimum: 4096 bytes 00:21:11.500 Memory Page Size Maximum: 4096 bytes 00:21:11.500 Persistent Memory Region: Not Supported 00:21:11.500 Optional Asynchronous Events Supported 00:21:11.500 Namespace Attribute Notices: Supported 00:21:11.500 Firmware Activation Notices: Not Supported 00:21:11.500 ANA Change Notices: Not Supported 00:21:11.500 PLE Aggregate Log Change Notices: Not Supported 00:21:11.500 LBA Status Info Alert Notices: Not Supported 00:21:11.500 EGE Aggregate Log Change Notices: Not Supported 00:21:11.500 Normal NVM Subsystem Shutdown event: Not Supported 00:21:11.500 Zone Descriptor Change Notices: Not Supported 00:21:11.500 Discovery Log Change Notices: Not Supported 00:21:11.500 Controller Attributes 00:21:11.500 128-bit Host Identifier: Supported 00:21:11.500 Non-Operational Permissive Mode: Not Supported 00:21:11.500 NVM Sets: Not Supported 00:21:11.500 Read Recovery Levels: Not Supported 00:21:11.500 Endurance Groups: Not Supported 00:21:11.500 Predictable Latency Mode: Not Supported 00:21:11.500 Traffic Based Keep ALive: Not Supported 00:21:11.500 Namespace Granularity: Not Supported 00:21:11.500 SQ Associations: Not Supported 00:21:11.500 UUID List: Not Supported 00:21:11.500 Multi-Domain Subsystem: Not Supported 00:21:11.500 Fixed Capacity Management: Not Supported 00:21:11.500 Variable Capacity Management: Not Supported 00:21:11.500 Delete Endurance Group: Not Supported 00:21:11.500 Delete NVM Set: Not Supported 00:21:11.500 Extended LBA Formats Supported: Not Supported 00:21:11.500 Flexible Data Placement Supported: Not Supported 00:21:11.500 00:21:11.500 Controller Memory Buffer Support 00:21:11.500 ================================ 00:21:11.500 Supported: No 00:21:11.500 00:21:11.500 Persistent Memory Region Support 00:21:11.500 ================================ 00:21:11.500 Supported: No 00:21:11.500 00:21:11.500 Admin Command Set Attributes 00:21:11.500 ============================ 00:21:11.500 Security Send/Receive: Not Supported 00:21:11.500 Format NVM: Not Supported 00:21:11.500 Firmware Activate/Download: Not Supported 00:21:11.500 Namespace Management: Not Supported 00:21:11.500 Device Self-Test: Not Supported 00:21:11.500 Directives: Not Supported 00:21:11.500 NVMe-MI: Not Supported 00:21:11.500 Virtualization Management: Not Supported 00:21:11.500 Doorbell Buffer Config: Not Supported 00:21:11.500 Get LBA Status Capability: Not Supported 00:21:11.500 Command & Feature Lockdown Capability: Not Supported 00:21:11.500 Abort Command Limit: 4 00:21:11.500 Async Event Request Limit: 4 00:21:11.500 Number of Firmware Slots: N/A 00:21:11.500 Firmware Slot 1 Read-Only: N/A 00:21:11.500 Firmware Activation Without Reset: N/A 00:21:11.500 Multiple Update Detection Support: N/A 00:21:11.500 Firmware Update Granularity: No Information Provided 00:21:11.500 Per-Namespace SMART Log: No 00:21:11.500 Asymmetric Namespace Access Log Page: Not Supported 00:21:11.500 Subsystem NQN: 
nqn.2016-06.io.spdk:cnode1 00:21:11.500 Command Effects Log Page: Supported 00:21:11.500 Get Log Page Extended Data: Supported 00:21:11.501 Telemetry Log Pages: Not Supported 00:21:11.501 Persistent Event Log Pages: Not Supported 00:21:11.501 Supported Log Pages Log Page: May Support 00:21:11.501 Commands Supported & Effects Log Page: Not Supported 00:21:11.501 Feature Identifiers & Effects Log Page:May Support 00:21:11.501 NVMe-MI Commands & Effects Log Page: May Support 00:21:11.501 Data Area 4 for Telemetry Log: Not Supported 00:21:11.501 Error Log Page Entries Supported: 128 00:21:11.501 Keep Alive: Supported 00:21:11.501 Keep Alive Granularity: 10000 ms 00:21:11.501 00:21:11.501 NVM Command Set Attributes 00:21:11.501 ========================== 00:21:11.501 Submission Queue Entry Size 00:21:11.501 Max: 64 00:21:11.501 Min: 64 00:21:11.501 Completion Queue Entry Size 00:21:11.501 Max: 16 00:21:11.501 Min: 16 00:21:11.501 Number of Namespaces: 32 00:21:11.501 Compare Command: Supported 00:21:11.501 Write Uncorrectable Command: Not Supported 00:21:11.501 Dataset Management Command: Supported 00:21:11.501 Write Zeroes Command: Supported 00:21:11.501 Set Features Save Field: Not Supported 00:21:11.501 Reservations: Supported 00:21:11.501 Timestamp: Not Supported 00:21:11.501 Copy: Supported 00:21:11.501 Volatile Write Cache: Present 00:21:11.501 Atomic Write Unit (Normal): 1 00:21:11.501 Atomic Write Unit (PFail): 1 00:21:11.501 Atomic Compare & Write Unit: 1 00:21:11.501 Fused Compare & Write: Supported 00:21:11.501 Scatter-Gather List 00:21:11.501 SGL Command Set: Supported 00:21:11.501 SGL Keyed: Supported 00:21:11.501 SGL Bit Bucket Descriptor: Not Supported 00:21:11.501 SGL Metadata Pointer: Not Supported 00:21:11.501 Oversized SGL: Not Supported 00:21:11.501 SGL Metadata Address: Not Supported 00:21:11.501 SGL Offset: Supported 00:21:11.501 Transport SGL Data Block: Not Supported 00:21:11.501 Replay Protected Memory Block: Not Supported 00:21:11.501 00:21:11.501 Firmware Slot Information 00:21:11.501 ========================= 00:21:11.501 Active slot: 1 00:21:11.501 Slot 1 Firmware Revision: 25.01 00:21:11.501 00:21:11.501 00:21:11.501 Commands Supported and Effects 00:21:11.501 ============================== 00:21:11.501 Admin Commands 00:21:11.501 -------------- 00:21:11.501 Get Log Page (02h): Supported 00:21:11.501 Identify (06h): Supported 00:21:11.501 Abort (08h): Supported 00:21:11.501 Set Features (09h): Supported 00:21:11.501 Get Features (0Ah): Supported 00:21:11.501 Asynchronous Event Request (0Ch): Supported 00:21:11.501 Keep Alive (18h): Supported 00:21:11.501 I/O Commands 00:21:11.501 ------------ 00:21:11.501 Flush (00h): Supported LBA-Change 00:21:11.501 Write (01h): Supported LBA-Change 00:21:11.501 Read (02h): Supported 00:21:11.501 Compare (05h): Supported 00:21:11.501 Write Zeroes (08h): Supported LBA-Change 00:21:11.501 Dataset Management (09h): Supported LBA-Change 00:21:11.501 Copy (19h): Supported LBA-Change 00:21:11.501 00:21:11.501 Error Log 00:21:11.501 ========= 00:21:11.501 00:21:11.501 Arbitration 00:21:11.501 =========== 00:21:11.501 Arbitration Burst: 1 00:21:11.501 00:21:11.501 Power Management 00:21:11.501 ================ 00:21:11.501 Number of Power States: 1 00:21:11.501 Current Power State: Power State #0 00:21:11.501 Power State #0: 00:21:11.501 Max Power: 0.00 W 00:21:11.501 Non-Operational State: Operational 00:21:11.501 Entry Latency: Not Reported 00:21:11.501 Exit Latency: Not Reported 00:21:11.501 Relative Read Throughput: 0 00:21:11.501 
Relative Read Latency: 0 00:21:11.501 Relative Write Throughput: 0 00:21:11.501 Relative Write Latency: 0 00:21:11.501 Idle Power: Not Reported 00:21:11.501 Active Power: Not Reported 00:21:11.501 Non-Operational Permissive Mode: Not Supported 00:21:11.501 00:21:11.501 Health Information 00:21:11.501 ================== 00:21:11.501 Critical Warnings: 00:21:11.501 Available Spare Space: OK 00:21:11.501 Temperature: OK 00:21:11.501 Device Reliability: OK 00:21:11.501 Read Only: No 00:21:11.501 Volatile Memory Backup: OK 00:21:11.501 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:11.501 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:11.501 Available Spare: 0% 00:21:11.501 Available Spare Threshold: 0% 00:21:11.501 Life Percentage Used:[2024-12-09 05:15:48.102666] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.501 [2024-12-09 05:15:48.102671] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2282690) 00:21:11.501 [2024-12-09 05:15:48.102677] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.501 [2024-12-09 05:15:48.102688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4b80, cid 7, qid 0 00:21:11.501 [2024-12-09 05:15:48.102762] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.501 [2024-12-09 05:15:48.102768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.501 [2024-12-09 05:15:48.102771] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.501 [2024-12-09 05:15:48.102774] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4b80) on tqpair=0x2282690 00:21:11.501 [2024-12-09 05:15:48.102802] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:21:11.501 [2024-12-09 05:15:48.102812] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4100) on tqpair=0x2282690 00:21:11.501 [2024-12-09 05:15:48.102818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.501 [2024-12-09 05:15:48.102825] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4280) on tqpair=0x2282690 00:21:11.501 [2024-12-09 05:15:48.102830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.501 [2024-12-09 05:15:48.102834] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4400) on tqpair=0x2282690 00:21:11.501 [2024-12-09 05:15:48.102838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.501 [2024-12-09 05:15:48.102842] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4580) on tqpair=0x2282690 00:21:11.501 [2024-12-09 05:15:48.102846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.501 [2024-12-09 05:15:48.102853] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.501 [2024-12-09 05:15:48.102856] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.501 [2024-12-09 05:15:48.102859] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282690) 00:21:11.501 [2024-12-09 05:15:48.102865] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.501 [2024-12-09 05:15:48.102877] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4580, cid 3, qid 0 00:21:11.501 [2024-12-09 05:15:48.102945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.501 [2024-12-09 05:15:48.102951] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.501 [2024-12-09 05:15:48.102954] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.501 [2024-12-09 05:15:48.102957] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4580) on tqpair=0x2282690 00:21:11.501 [2024-12-09 05:15:48.102963] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.501 [2024-12-09 05:15:48.102966] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.501 [2024-12-09 05:15:48.102969] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282690) 00:21:11.501 [2024-12-09 05:15:48.102975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.501 [2024-12-09 05:15:48.102987] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4580, cid 3, qid 0 00:21:11.501 [2024-12-09 05:15:48.103064] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.501 [2024-12-09 05:15:48.103071] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.501 [2024-12-09 05:15:48.103074] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.501 [2024-12-09 05:15:48.103077] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4580) on tqpair=0x2282690 00:21:11.501 [2024-12-09 05:15:48.103081] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:21:11.501 [2024-12-09 05:15:48.103085] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:21:11.501 [2024-12-09 05:15:48.103093] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.501 [2024-12-09 05:15:48.103096] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.501 [2024-12-09 05:15:48.103100] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282690) 00:21:11.501 [2024-12-09 05:15:48.103105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.501 [2024-12-09 05:15:48.103115] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4580, cid 3, qid 0 00:21:11.501 [2024-12-09 05:15:48.103182] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.501 [2024-12-09 05:15:48.103188] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.501 [2024-12-09 05:15:48.103191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.501 [2024-12-09 05:15:48.103196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4580) on tqpair=0x2282690 00:21:11.501 [2024-12-09 05:15:48.103205] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.501 [2024-12-09 05:15:48.103208] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.501 [2024-12-09 05:15:48.103211] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282690) 00:21:11.501 [2024-12-09 05:15:48.103217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.501 [2024-12-09 05:15:48.103227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4580, cid 3, qid 0 00:21:11.501 [2024-12-09 05:15:48.103294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.501 [2024-12-09 05:15:48.103300] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.502 [2024-12-09 05:15:48.103303] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.103306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4580) on tqpair=0x2282690 00:21:11.502 [2024-12-09 05:15:48.103314] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.103318] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.103321] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282690) 00:21:11.502 [2024-12-09 05:15:48.103327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.502 [2024-12-09 05:15:48.103335] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4580, cid 3, qid 0 00:21:11.502 [2024-12-09 05:15:48.103419] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.502 [2024-12-09 05:15:48.103424] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.502 [2024-12-09 05:15:48.103427] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.103430] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4580) on tqpair=0x2282690 00:21:11.502 [2024-12-09 05:15:48.103439] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.103443] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.103446] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282690) 00:21:11.502 [2024-12-09 05:15:48.103451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.502 [2024-12-09 05:15:48.103460] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4580, cid 3, qid 0 00:21:11.502 [2024-12-09 05:15:48.103527] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.502 [2024-12-09 05:15:48.103533] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.502 [2024-12-09 05:15:48.103536] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.103539] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4580) on tqpair=0x2282690 00:21:11.502 [2024-12-09 05:15:48.103547] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.103551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.103554] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282690) 00:21:11.502 [2024-12-09 05:15:48.103560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.502 [2024-12-09 05:15:48.103569] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4580, cid 3, qid 0 00:21:11.502 [2024-12-09 05:15:48.103636] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.502 [2024-12-09 05:15:48.103642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.502 [2024-12-09 05:15:48.103645] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.103648] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4580) on tqpair=0x2282690 00:21:11.502 [2024-12-09 05:15:48.103657] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.103661] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.103664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282690) 00:21:11.502 [2024-12-09 05:15:48.103670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.502 [2024-12-09 05:15:48.103679] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4580, cid 3, qid 0 00:21:11.502 [2024-12-09 05:15:48.103744] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.502 [2024-12-09 05:15:48.103750] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.502 [2024-12-09 05:15:48.103753] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.103756] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4580) on tqpair=0x2282690 00:21:11.502 [2024-12-09 05:15:48.103764] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.103768] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.103771] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282690) 00:21:11.502 [2024-12-09 05:15:48.103776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.502 [2024-12-09 05:15:48.103786] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4580, cid 3, qid 0 00:21:11.502 [2024-12-09 05:15:48.103854] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.502 [2024-12-09 05:15:48.103860] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.502 [2024-12-09 05:15:48.103863] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.103867] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4580) on tqpair=0x2282690 00:21:11.502 [2024-12-09 05:15:48.103875] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.103878] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.103881] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282690) 00:21:11.502 [2024-12-09 05:15:48.103887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.502 [2024-12-09 05:15:48.103896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4580, cid 3, qid 0 00:21:11.502 [2024-12-09 
05:15:48.103960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.502 [2024-12-09 05:15:48.103966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.502 [2024-12-09 05:15:48.103969] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.103972] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4580) on tqpair=0x2282690 00:21:11.502 [2024-12-09 05:15:48.103980] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.103984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.103987] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282690) 00:21:11.502 [2024-12-09 05:15:48.103992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.502 [2024-12-09 05:15:48.104007] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4580, cid 3, qid 0 00:21:11.502 [2024-12-09 05:15:48.104080] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.502 [2024-12-09 05:15:48.104085] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.502 [2024-12-09 05:15:48.104088] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.104092] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4580) on tqpair=0x2282690 00:21:11.502 [2024-12-09 05:15:48.104099] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.104105] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.104108] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282690) 00:21:11.502 [2024-12-09 05:15:48.104114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.502 [2024-12-09 05:15:48.104124] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4580, cid 3, qid 0 00:21:11.502 [2024-12-09 05:15:48.104192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.502 [2024-12-09 05:15:48.104198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.502 [2024-12-09 05:15:48.104200] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.104204] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4580) on tqpair=0x2282690 00:21:11.502 [2024-12-09 05:15:48.104212] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.104215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.104218] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282690) 00:21:11.502 [2024-12-09 05:15:48.104224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.502 [2024-12-09 05:15:48.104233] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4580, cid 3, qid 0 00:21:11.502 [2024-12-09 05:15:48.104300] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.502 [2024-12-09 05:15:48.104306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.502 
[2024-12-09 05:15:48.104309] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.104312] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4580) on tqpair=0x2282690 00:21:11.502 [2024-12-09 05:15:48.104320] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.104324] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.104327] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282690) 00:21:11.502 [2024-12-09 05:15:48.104333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.502 [2024-12-09 05:15:48.104342] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4580, cid 3, qid 0 00:21:11.502 [2024-12-09 05:15:48.104413] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.502 [2024-12-09 05:15:48.104418] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.502 [2024-12-09 05:15:48.104421] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.104425] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4580) on tqpair=0x2282690 00:21:11.502 [2024-12-09 05:15:48.104432] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.104436] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.104439] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282690) 00:21:11.502 [2024-12-09 05:15:48.104445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.502 [2024-12-09 05:15:48.104454] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4580, cid 3, qid 0 00:21:11.502 [2024-12-09 05:15:48.104521] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.502 [2024-12-09 05:15:48.104527] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.502 [2024-12-09 05:15:48.104530] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.104533] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4580) on tqpair=0x2282690 00:21:11.502 [2024-12-09 05:15:48.104541] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.104544] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.502 [2024-12-09 05:15:48.104549] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282690) 00:21:11.502 [2024-12-09 05:15:48.104555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.502 [2024-12-09 05:15:48.104564] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4580, cid 3, qid 0 00:21:11.502 [2024-12-09 05:15:48.104634] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.503 [2024-12-09 05:15:48.104640] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.503 [2024-12-09 05:15:48.104643] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.503 [2024-12-09 05:15:48.104646] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x22e4580) on tqpair=0x2282690 00:21:11.503 [2024-12-09 05:15:48.104654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.503 [2024-12-09 05:15:48.104658] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.503 [2024-12-09 05:15:48.104661] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282690) 00:21:11.503 [2024-12-09 05:15:48.104666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.503 [2024-12-09 05:15:48.104676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4580, cid 3, qid 0 00:21:11.503 [2024-12-09 05:15:48.104741] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.503 [2024-12-09 05:15:48.104747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.503 [2024-12-09 05:15:48.104750] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.503 [2024-12-09 05:15:48.104753] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4580) on tqpair=0x2282690 00:21:11.503 [2024-12-09 05:15:48.104761] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.503 [2024-12-09 05:15:48.104765] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.503 [2024-12-09 05:15:48.104768] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282690) 00:21:11.503 [2024-12-09 05:15:48.104773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.503 [2024-12-09 05:15:48.104783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4580, cid 3, qid 0 00:21:11.503 [2024-12-09 05:15:48.104851] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.503 [2024-12-09 05:15:48.104857] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.503 [2024-12-09 05:15:48.104860] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.503 [2024-12-09 05:15:48.104863] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4580) on tqpair=0x2282690 00:21:11.503 [2024-12-09 05:15:48.104871] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.503 [2024-12-09 05:15:48.104875] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.503 [2024-12-09 05:15:48.104878] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282690) 00:21:11.503 [2024-12-09 05:15:48.104883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.503 [2024-12-09 05:15:48.104892] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4580, cid 3, qid 0 00:21:11.503 [2024-12-09 05:15:48.108005] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.503 [2024-12-09 05:15:48.108013] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.503 [2024-12-09 05:15:48.108016] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.503 [2024-12-09 05:15:48.108020] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4580) on tqpair=0x2282690 00:21:11.503 [2024-12-09 05:15:48.108030] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:11.503 [2024-12-09 05:15:48.108033] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:11.503 [2024-12-09 05:15:48.108036] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2282690) 00:21:11.503 [2024-12-09 05:15:48.108045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.503 [2024-12-09 05:15:48.108056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22e4580, cid 3, qid 0 00:21:11.503 [2024-12-09 05:15:48.108199] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:11.503 [2024-12-09 05:15:48.108205] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:11.503 [2024-12-09 05:15:48.108208] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:11.503 [2024-12-09 05:15:48.108211] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22e4580) on tqpair=0x2282690 00:21:11.503 [2024-12-09 05:15:48.108217] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:21:11.761 0% 00:21:11.761 Data Units Read: 0 00:21:11.761 Data Units Written: 0 00:21:11.761 Host Read Commands: 0 00:21:11.761 Host Write Commands: 0 00:21:11.761 Controller Busy Time: 0 minutes 00:21:11.761 Power Cycles: 0 00:21:11.761 Power On Hours: 0 hours 00:21:11.761 Unsafe Shutdowns: 0 00:21:11.761 Unrecoverable Media Errors: 0 00:21:11.761 Lifetime Error Log Entries: 0 00:21:11.761 Warning Temperature Time: 0 minutes 00:21:11.761 Critical Temperature Time: 0 minutes 00:21:11.761 00:21:11.761 Number of Queues 00:21:11.761 ================ 00:21:11.761 Number of I/O Submission Queues: 127 00:21:11.761 Number of I/O Completion Queues: 127 00:21:11.761 00:21:11.761 Active Namespaces 00:21:11.761 ================= 00:21:11.761 Namespace ID:1 00:21:11.761 Error Recovery Timeout: Unlimited 00:21:11.761 Command Set Identifier: NVM (00h) 00:21:11.761 Deallocate: Supported 00:21:11.761 Deallocated/Unwritten Error: Not Supported 00:21:11.761 Deallocated Read Value: Unknown 00:21:11.761 Deallocate in Write Zeroes: Not Supported 00:21:11.761 Deallocated Guard Field: 0xFFFF 00:21:11.761 Flush: Supported 00:21:11.761 Reservation: Supported 00:21:11.761 Namespace Sharing Capabilities: Multiple Controllers 00:21:11.761 Size (in LBAs): 131072 (0GiB) 00:21:11.761 Capacity (in LBAs): 131072 (0GiB) 00:21:11.761 Utilization (in LBAs): 131072 (0GiB) 00:21:11.761 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:11.761 EUI64: ABCDEF0123456789 00:21:11.761 UUID: 1e8e1058-1266-496e-8a11-4825c298130f 00:21:11.761 Thin Provisioning: Not Supported 00:21:11.761 Per-NS Atomic Units: Yes 00:21:11.761 Atomic Boundary Size (Normal): 0 00:21:11.761 Atomic Boundary Size (PFail): 0 00:21:11.761 Atomic Boundary Offset: 0 00:21:11.761 Maximum Single Source Range Length: 65535 00:21:11.761 Maximum Copy Length: 65535 00:21:11.761 Maximum Source Range Count: 1 00:21:11.761 NGUID/EUI64 Never Reused: No 00:21:11.761 Namespace Write Protected: No 00:21:11.761 Number of LBA Formats: 1 00:21:11.761 Current LBA Format: LBA Format #00 00:21:11.761 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:11.761 00:21:11.761 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:11.761 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:11.761 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:11.761 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:11.761 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.761 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:11.761 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:11.761 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:11.761 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:21:11.761 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:11.762 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:21:11.762 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:11.762 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:11.762 rmmod nvme_tcp 00:21:11.762 rmmod nvme_fabrics 00:21:11.762 rmmod nvme_keyring 00:21:11.762 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:11.762 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:21:11.762 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:21:11.762 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3660153 ']' 00:21:11.762 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3660153 00:21:11.762 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3660153 ']' 00:21:11.762 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3660153 00:21:11.762 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:21:11.762 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:11.762 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3660153 00:21:11.762 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:11.762 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:11.762 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3660153' 00:21:11.762 killing process with pid 3660153 00:21:11.762 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3660153 00:21:11.762 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3660153 00:21:12.020 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:12.020 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:12.020 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:12.020 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:21:12.020 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:21:12.020 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:12.020 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:21:12.020 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:12.020 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:12.020 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.020 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:12.020 05:15:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:14.554 00:21:14.554 real 0m9.009s 00:21:14.554 user 0m5.962s 00:21:14.554 sys 0m4.529s 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:14.554 ************************************ 00:21:14.554 END TEST nvmf_identify 00:21:14.554 ************************************ 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.554 ************************************ 00:21:14.554 START TEST nvmf_perf 00:21:14.554 ************************************ 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:14.554 * Looking for test storage... 
00:21:14.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:14.554 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:14.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.555 --rc genhtml_branch_coverage=1 00:21:14.555 --rc genhtml_function_coverage=1 00:21:14.555 --rc genhtml_legend=1 00:21:14.555 --rc geninfo_all_blocks=1 00:21:14.555 --rc geninfo_unexecuted_blocks=1 00:21:14.555 00:21:14.555 ' 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:14.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.555 --rc genhtml_branch_coverage=1 00:21:14.555 --rc genhtml_function_coverage=1 00:21:14.555 --rc genhtml_legend=1 00:21:14.555 --rc geninfo_all_blocks=1 00:21:14.555 --rc geninfo_unexecuted_blocks=1 00:21:14.555 00:21:14.555 ' 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:14.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.555 --rc genhtml_branch_coverage=1 00:21:14.555 --rc genhtml_function_coverage=1 00:21:14.555 --rc genhtml_legend=1 00:21:14.555 --rc geninfo_all_blocks=1 00:21:14.555 --rc geninfo_unexecuted_blocks=1 00:21:14.555 00:21:14.555 ' 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:14.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.555 --rc genhtml_branch_coverage=1 00:21:14.555 --rc genhtml_function_coverage=1 00:21:14.555 --rc genhtml_legend=1 00:21:14.555 --rc geninfo_all_blocks=1 00:21:14.555 --rc geninfo_unexecuted_blocks=1 00:21:14.555 00:21:14.555 ' 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:14.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:14.555 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.556 05:15:50 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:14.556 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.556 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:14.556 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:14.556 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:14.556 05:15:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:19.828 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:19.828 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:19.828 Found net devices under 0000:86:00.0: cvl_0_0 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:19.828 05:15:55 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:19.828 Found net devices under 0000:86:00.1: cvl_0_1 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.828 05:15:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:19.828 05:15:56 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:19.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:19.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:21:19.828 00:21:19.828 --- 10.0.0.2 ping statistics --- 00:21:19.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.828 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:19.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:19.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:21:19.828 00:21:19.828 --- 10.0.0.1 ping statistics --- 00:21:19.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.828 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3663707 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3663707 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3663707 ']' 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.828 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.829 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:21:19.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.829 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.829 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:19.829 [2024-12-09 05:15:56.313111] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:21:19.829 [2024-12-09 05:15:56.313161] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.829 [2024-12-09 05:15:56.383078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:19.829 [2024-12-09 05:15:56.426138] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.829 [2024-12-09 05:15:56.426175] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.829 [2024-12-09 05:15:56.426182] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.829 [2024-12-09 05:15:56.426189] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.829 [2024-12-09 05:15:56.426197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:19.829 [2024-12-09 05:15:56.427787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.829 [2024-12-09 05:15:56.427888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.829 [2024-12-09 05:15:56.427992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:19.829 [2024-12-09 05:15:56.427993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.088 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:20.088 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:21:20.088 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:20.088 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:20.088 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:20.088 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.088 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:20.088 05:15:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:23.370 05:15:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:23.370 05:15:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:23.370 05:15:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:21:23.370 05:15:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:23.629 05:16:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
00:21:23.629 05:16:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:21:23.629 05:16:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:23.629 05:16:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:23.629 05:16:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:23.629 [2024-12-09 05:16:00.199067] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.629 05:16:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:23.887 05:16:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:23.887 05:16:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:24.145 05:16:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:24.145 05:16:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:24.404 05:16:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:24.404 [2024-12-09 05:16:01.030169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.662 05:16:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:24.662 05:16:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:21:24.662 05:16:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:21:24.662 05:16:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:24.663 05:16:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:21:26.039 Initializing NVMe Controllers 00:21:26.039 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:21:26.039 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:21:26.039 Initialization complete. Launching workers. 
00:21:26.039 ======================================================== 00:21:26.039 Latency(us) 00:21:26.039 Device Information : IOPS MiB/s Average min max 00:21:26.039 PCIE (0000:5e:00.0) NSID 1 from core 0: 97837.90 382.18 326.62 9.51 4258.95 00:21:26.039 ======================================================== 00:21:26.039 Total : 97837.90 382.18 326.62 9.51 4258.95 00:21:26.039 00:21:26.039 05:16:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:27.416 Initializing NVMe Controllers 00:21:27.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:27.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:27.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:27.416 Initialization complete. Launching workers. 00:21:27.416 ======================================================== 00:21:27.416 Latency(us) 00:21:27.416 Device Information : IOPS MiB/s Average min max 00:21:27.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 104.00 0.41 9787.99 125.73 44674.09 00:21:27.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 48.00 0.19 21305.91 6162.95 51871.36 00:21:27.416 ======================================================== 00:21:27.416 Total : 152.00 0.59 13425.23 125.73 51871.36 00:21:27.416 00:21:27.677 05:16:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:29.157 Initializing NVMe Controllers 00:21:29.157 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:29.157 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:29.157 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:29.157 Initialization complete. Launching workers. 00:21:29.157 ======================================================== 00:21:29.157 Latency(us) 00:21:29.157 Device Information : IOPS MiB/s Average min max 00:21:29.157 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10651.00 41.61 3005.05 340.28 8205.04 00:21:29.157 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3804.00 14.86 8445.68 5566.20 16168.60 00:21:29.157 ======================================================== 00:21:29.157 Total : 14455.00 56.46 4436.81 340.28 16168.60 00:21:29.157 00:21:29.157 05:16:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:29.157 05:16:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:29.157 05:16:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:31.687 Initializing NVMe Controllers 00:21:31.687 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:31.687 Controller IO queue size 128, less than required. 00:21:31.687 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:21:31.687 Controller IO queue size 128, less than required. 00:21:31.687 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:31.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:31.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:31.688 Initialization complete. Launching workers. 00:21:31.688 ======================================================== 00:21:31.688 Latency(us) 00:21:31.688 Device Information : IOPS MiB/s Average min max 00:21:31.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1596.57 399.14 81796.31 52668.09 156569.54 00:21:31.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 584.79 146.20 222035.12 72683.89 338850.97 00:21:31.688 ======================================================== 00:21:31.688 Total : 2181.36 545.34 119392.38 52668.09 338850.97 00:21:31.688 00:21:31.688 05:16:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:31.688 No valid NVMe controllers or AIO or URING devices found 00:21:31.688 Initializing NVMe Controllers 00:21:31.688 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:31.688 Controller IO queue size 128, less than required. 00:21:31.688 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:31.688 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:31.688 Controller IO queue size 128, less than required. 00:21:31.688 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:31.688 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:21:31.688 WARNING: Some requested NVMe devices were skipped 00:21:31.946 05:16:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:34.478 Initializing NVMe Controllers 00:21:34.478 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:34.478 Controller IO queue size 128, less than required. 00:21:34.478 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:34.478 Controller IO queue size 128, less than required. 00:21:34.478 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:34.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:34.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:34.478 Initialization complete. Launching workers. 
00:21:34.478 00:21:34.478 ==================== 00:21:34.478 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:34.478 TCP transport: 00:21:34.478 polls: 12948 00:21:34.478 idle_polls: 7970 00:21:34.478 sock_completions: 4978 00:21:34.478 nvme_completions: 6141 00:21:34.478 submitted_requests: 9248 00:21:34.478 queued_requests: 1 00:21:34.478 00:21:34.478 ==================== 00:21:34.478 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:34.478 TCP transport: 00:21:34.478 polls: 12641 00:21:34.478 idle_polls: 7670 00:21:34.478 sock_completions: 4971 00:21:34.478 nvme_completions: 5983 00:21:34.478 submitted_requests: 8996 00:21:34.478 queued_requests: 1 00:21:34.478 ======================================================== 00:21:34.478 Latency(us) 00:21:34.478 Device Information : IOPS MiB/s Average min max 00:21:34.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1534.96 383.74 85922.74 57304.79 148272.64 00:21:34.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1495.46 373.87 85438.47 45287.00 122303.88 00:21:34.478 ======================================================== 00:21:34.478 Total : 3030.43 757.61 85683.76 45287.00 148272.64 00:21:34.478 00:21:34.478 05:16:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:34.478 05:16:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:34.478 05:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:34.478 05:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:34.478 05:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:34.478 05:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:34.478 05:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:21:34.478 05:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:34.478 05:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:21:34.478 05:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:34.478 05:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:34.478 rmmod nvme_tcp 00:21:34.478 rmmod nvme_fabrics 00:21:34.478 rmmod nvme_keyring 00:21:34.478 05:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:34.478 05:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:21:34.478 05:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:21:34.478 05:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3663707 ']' 00:21:34.478 05:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3663707 00:21:34.478 05:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3663707 ']' 00:21:34.478 05:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3663707 00:21:34.478 05:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:21:34.478 05:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.478 05:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3663707 00:21:34.737 05:16:11 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:34.737 05:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:34.738 05:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3663707' 00:21:34.738 killing process with pid 3663707 00:21:34.738 05:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3663707 00:21:34.738 05:16:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3663707 00:21:36.113 05:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:36.113 05:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:36.113 05:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:36.113 05:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:21:36.113 05:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:21:36.113 05:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:36.113 05:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:21:36.113 05:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:36.113 05:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:36.113 05:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.114 05:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:36.114 05:16:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.658 05:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:38.658 00:21:38.658 real 0m24.028s 00:21:38.658 user 1m4.362s 00:21:38.658 sys 0m7.776s 00:21:38.658 05:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:38.658 05:16:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:38.658 ************************************ 00:21:38.658 END TEST nvmf_perf 00:21:38.658 ************************************ 00:21:38.658 05:16:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:38.658 05:16:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:38.658 05:16:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:38.658 05:16:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.658 ************************************ 00:21:38.658 START TEST nvmf_fio_host 00:21:38.658 ************************************ 00:21:38.658 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:38.658 * Looking for test storage... 
00:21:38.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:38.658 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:38.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.659 --rc genhtml_branch_coverage=1 00:21:38.659 --rc genhtml_function_coverage=1 00:21:38.659 --rc genhtml_legend=1 00:21:38.659 --rc geninfo_all_blocks=1 00:21:38.659 --rc geninfo_unexecuted_blocks=1 00:21:38.659 00:21:38.659 ' 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:38.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.659 --rc genhtml_branch_coverage=1 00:21:38.659 --rc genhtml_function_coverage=1 00:21:38.659 --rc genhtml_legend=1 00:21:38.659 --rc geninfo_all_blocks=1 00:21:38.659 --rc geninfo_unexecuted_blocks=1 00:21:38.659 00:21:38.659 ' 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:38.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.659 --rc genhtml_branch_coverage=1 00:21:38.659 --rc genhtml_function_coverage=1 00:21:38.659 --rc genhtml_legend=1 00:21:38.659 --rc geninfo_all_blocks=1 00:21:38.659 --rc geninfo_unexecuted_blocks=1 00:21:38.659 00:21:38.659 ' 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:38.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.659 --rc genhtml_branch_coverage=1 00:21:38.659 --rc genhtml_function_coverage=1 00:21:38.659 --rc genhtml_legend=1 00:21:38.659 --rc geninfo_all_blocks=1 00:21:38.659 --rc geninfo_unexecuted_blocks=1 00:21:38.659 00:21:38.659 ' 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.659 05:16:14 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:38.659 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:38.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:38.660 
05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.660 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.661 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.661 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:38.661 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:38.661 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:21:38.661 05:16:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.933 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:43.933 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:21:43.933 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:43.933 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:43.933 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:43.933 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:43.933 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:43.933 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:21:43.933 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:43.933 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:21:43.933 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:21:43.933 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:21:43.933 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:21:43.933 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:21:43.933 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:21:43.933 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:43.933 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:43.934 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:43.934 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:43.934 Found net devices under 0000:86:00.0: cvl_0_0 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:43.934 Found net devices under 0000:86:00.1: cvl_0_1 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:43.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:43.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:21:43.934 00:21:43.934 --- 10.0.0.2 ping statistics --- 00:21:43.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.934 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:43.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:43.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:21:43.934 00:21:43.934 --- 10.0.0.1 ping statistics --- 00:21:43.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.934 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3669810 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3669810 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3669810 ']' 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.934 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:43.935 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.935 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:43.935 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.935 [2024-12-09 05:16:20.536169] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:21:43.935 [2024-12-09 05:16:20.536215] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.194 [2024-12-09 05:16:20.606305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:44.194 [2024-12-09 05:16:20.649539] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.194 [2024-12-09 05:16:20.649578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.194 [2024-12-09 05:16:20.649586] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.194 [2024-12-09 05:16:20.649592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.194 [2024-12-09 05:16:20.649598] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:44.194 [2024-12-09 05:16:20.651203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.194 [2024-12-09 05:16:20.651301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.194 [2024-12-09 05:16:20.651557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:44.194 [2024-12-09 05:16:20.651559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.194 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:44.194 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:21:44.194 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:44.453 [2024-12-09 05:16:20.934879] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.453 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:44.453 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:44.453 05:16:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.453 05:16:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:44.712 Malloc1 00:21:44.712 05:16:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:44.971 05:16:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:44.971 05:16:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:45.230 [2024-12-09 05:16:21.775399] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.230 05:16:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:45.489 05:16:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:45.489 05:16:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:45.489 05:16:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:45.489 05:16:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:45.489 05:16:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:45.489 05:16:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:45.489 05:16:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:45.489 05:16:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:45.489 05:16:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:45.489 05:16:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:45.489 05:16:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:45.489 05:16:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:45.489 05:16:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:45.489 05:16:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:45.489 05:16:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:45.489 05:16:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:45.489 05:16:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:45.489 05:16:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:45.489 05:16:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:45.489 05:16:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:45.489 05:16:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:45.489 05:16:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:45.489 05:16:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:45.748 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:45.748 fio-3.35 00:21:45.748 Starting 1 thread 00:21:48.282 00:21:48.282 test: (groupid=0, jobs=1): 
err= 0: pid=3670401: Mon Dec 9 05:16:24 2024 00:21:48.282 read: IOPS=11.5k, BW=45.1MiB/s (47.3MB/s)(90.4MiB/2005msec) 00:21:48.282 slat (nsec): min=1545, max=242699, avg=1733.23, stdev=2251.00 00:21:48.282 clat (usec): min=3092, max=10167, avg=6141.13, stdev=479.85 00:21:48.282 lat (usec): min=3125, max=10169, avg=6142.87, stdev=479.81 00:21:48.282 clat percentiles (usec): 00:21:48.282 | 1.00th=[ 4948], 5.00th=[ 5407], 10.00th=[ 5538], 20.00th=[ 5800], 00:21:48.282 | 30.00th=[ 5932], 40.00th=[ 6063], 50.00th=[ 6128], 60.00th=[ 6259], 00:21:48.282 | 70.00th=[ 6390], 80.00th=[ 6521], 90.00th=[ 6718], 95.00th=[ 6849], 00:21:48.282 | 99.00th=[ 7177], 99.50th=[ 7373], 99.90th=[ 8848], 99.95th=[ 9896], 00:21:48.282 | 99.99th=[10159] 00:21:48.282 bw ( KiB/s): min=45408, max=46720, per=99.98%, avg=46154.00, stdev=560.28, samples=4 00:21:48.282 iops : min=11352, max=11680, avg=11538.50, stdev=140.07, samples=4 00:21:48.282 write: IOPS=11.5k, BW=44.8MiB/s (47.0MB/s)(89.8MiB/2005msec); 0 zone resets 00:21:48.282 slat (nsec): min=1582, max=228440, avg=1803.03, stdev=1688.09 00:21:48.282 clat (usec): min=2441, max=9977, avg=4943.92, stdev=407.41 00:21:48.282 lat (usec): min=2455, max=9979, avg=4945.73, stdev=407.49 00:21:48.282 clat percentiles (usec): 00:21:48.282 | 1.00th=[ 4047], 5.00th=[ 4293], 10.00th=[ 4490], 20.00th=[ 4621], 00:21:48.282 | 30.00th=[ 4752], 40.00th=[ 4883], 50.00th=[ 4948], 60.00th=[ 5014], 00:21:48.282 | 70.00th=[ 5145], 80.00th=[ 5211], 90.00th=[ 5407], 95.00th=[ 5538], 00:21:48.282 | 99.00th=[ 5800], 99.50th=[ 6063], 99.90th=[ 8160], 99.95th=[ 8979], 00:21:48.282 | 99.99th=[ 9896] 00:21:48.282 bw ( KiB/s): min=45464, max=46272, per=99.97%, avg=45844.00, stdev=335.65, samples=4 00:21:48.282 iops : min=11366, max=11568, avg=11461.00, stdev=83.91, samples=4 00:21:48.282 lat (msec) : 4=0.42%, 10=99.56%, 20=0.02% 00:21:48.282 cpu : usr=71.11%, sys=27.25%, ctx=92, majf=0, minf=2 00:21:48.282 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:48.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.282 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:48.282 issued rwts: total=23140,22986,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:48.282 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:48.282 00:21:48.282 Run status group 0 (all jobs): 00:21:48.282 READ: bw=45.1MiB/s (47.3MB/s), 45.1MiB/s-45.1MiB/s (47.3MB/s-47.3MB/s), io=90.4MiB (94.8MB), run=2005-2005msec 00:21:48.282 WRITE: bw=44.8MiB/s (47.0MB/s), 44.8MiB/s-44.8MiB/s (47.0MB/s-47.0MB/s), io=89.8MiB (94.1MB), run=2005-2005msec 00:21:48.282 05:16:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:48.282 05:16:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:48.282 05:16:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:48.282 05:16:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:48.282 05:16:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
local sanitizers 00:21:48.282 05:16:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:48.282 05:16:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:48.282 05:16:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:48.282 05:16:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:48.282 05:16:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:48.282 05:16:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:48.282 05:16:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:48.282 05:16:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:48.282 05:16:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:48.282 05:16:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:48.282 05:16:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:48.282 05:16:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:48.282 05:16:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:48.282 05:16:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:48.282 05:16:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:48.282 05:16:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:48.282 05:16:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:48.540 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:48.540 fio-3.35 00:21:48.540 Starting 1 thread 00:21:51.068 00:21:51.068 test: (groupid=0, jobs=1): err= 0: pid=3670972: Mon Dec 9 05:16:27 2024 00:21:51.068 read: IOPS=10.8k, BW=168MiB/s (176MB/s)(338MiB/2009msec) 00:21:51.068 slat (usec): min=2, max=100, avg= 2.86, stdev= 1.33 00:21:51.068 clat (usec): min=2038, max=13625, avg=6967.57, stdev=1701.25 00:21:51.068 lat (usec): min=2040, max=13628, avg=6970.42, stdev=1701.36 00:21:51.068 clat percentiles (usec): 00:21:51.068 | 1.00th=[ 3621], 5.00th=[ 4359], 10.00th=[ 4817], 20.00th=[ 5473], 00:21:51.068 | 30.00th=[ 5932], 40.00th=[ 6390], 50.00th=[ 6915], 60.00th=[ 7439], 00:21:51.068 | 70.00th=[ 7767], 80.00th=[ 8455], 90.00th=[ 8979], 95.00th=[ 9896], 00:21:51.068 | 99.00th=[11469], 99.50th=[12125], 99.90th=[12649], 99.95th=[12780], 00:21:51.068 | 99.99th=[12911] 00:21:51.068 bw ( KiB/s): min=79840, max=95552, per=50.12%, avg=86272.00, stdev=7058.06, samples=4 00:21:51.068 iops : min= 4990, max= 5972, avg=5392.00, stdev=441.13, samples=4 00:21:51.068 write: IOPS=6219, BW=97.2MiB/s (102MB/s)(176MiB/1812msec); 0 zone resets 00:21:51.068 slat 
(usec): min=29, max=386, avg=31.80, stdev= 7.44 00:21:51.068 clat (usec): min=2508, max=15232, avg=8726.71, stdev=1521.50 00:21:51.068 lat (usec): min=2540, max=15343, avg=8758.51, stdev=1523.04 00:21:51.068 clat percentiles (usec): 00:21:51.068 | 1.00th=[ 5735], 5.00th=[ 6652], 10.00th=[ 7046], 20.00th=[ 7439], 00:21:51.068 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8586], 60.00th=[ 8979], 00:21:51.068 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[10814], 95.00th=[11469], 00:21:51.068 | 99.00th=[12518], 99.50th=[13173], 99.90th=[14877], 99.95th=[15008], 00:21:51.068 | 99.99th=[15270] 00:21:51.068 bw ( KiB/s): min=83552, max=99328, per=90.15%, avg=89704.00, stdev=7363.12, samples=4 00:21:51.068 iops : min= 5222, max= 6208, avg=5606.50, stdev=460.20, samples=4 00:21:51.068 lat (msec) : 4=1.61%, 10=88.62%, 20=9.76% 00:21:51.068 cpu : usr=86.55%, sys=12.25%, ctx=45, majf=0, minf=2 00:21:51.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:21:51.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:51.068 issued rwts: total=21612,11269,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.068 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:51.068 00:21:51.068 Run status group 0 (all jobs): 00:21:51.068 READ: bw=168MiB/s (176MB/s), 168MiB/s-168MiB/s (176MB/s-176MB/s), io=338MiB (354MB), run=2009-2009msec 00:21:51.068 WRITE: bw=97.2MiB/s (102MB/s), 97.2MiB/s-97.2MiB/s (102MB/s-102MB/s), io=176MiB (185MB), run=1812-1812msec 00:21:51.068 05:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:51.327 05:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:51.327 05:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:51.327 05:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:51.327 05:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:51.327 05:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:51.327 05:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:21:51.327 05:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:51.327 05:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:21:51.327 05:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:51.327 05:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:51.327 rmmod nvme_tcp 00:21:51.327 rmmod nvme_fabrics 00:21:51.327 rmmod nvme_keyring 00:21:51.327 05:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:51.327 05:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:21:51.327 05:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:21:51.327 05:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3669810 ']' 00:21:51.327 05:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3669810 00:21:51.327 05:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3669810 ']' 00:21:51.327 05:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # 
kill -0 3669810 00:21:51.327 05:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:21:51.327 05:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:51.327 05:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3669810 00:21:51.327 05:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:51.327 05:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:51.327 05:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3669810' 00:21:51.327 killing process with pid 3669810 00:21:51.327 05:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3669810 00:21:51.327 05:16:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3669810 00:21:51.586 05:16:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:51.586 05:16:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:51.586 05:16:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:51.586 05:16:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:21:51.586 05:16:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:21:51.586 05:16:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:51.586 05:16:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:21:51.586 05:16:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:51.586 05:16:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:51.586 05:16:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.586 05:16:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.586 05:16:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:54.126 00:21:54.126 real 0m15.401s 00:21:54.126 user 0m46.193s 00:21:54.126 sys 0m6.105s 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.126 ************************************ 00:21:54.126 END TEST nvmf_fio_host 00:21:54.126 ************************************ 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.126 ************************************ 00:21:54.126 START TEST nvmf_failover 00:21:54.126 ************************************ 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:54.126 * Looking for test storage... 00:21:54.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:54.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.126 --rc genhtml_branch_coverage=1 00:21:54.126 --rc genhtml_function_coverage=1 00:21:54.126 --rc genhtml_legend=1 00:21:54.126 --rc geninfo_all_blocks=1 00:21:54.126 --rc geninfo_unexecuted_blocks=1 00:21:54.126 00:21:54.126 ' 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:54.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.126 --rc genhtml_branch_coverage=1 00:21:54.126 --rc genhtml_function_coverage=1 00:21:54.126 --rc genhtml_legend=1 00:21:54.126 --rc geninfo_all_blocks=1 00:21:54.126 --rc geninfo_unexecuted_blocks=1 00:21:54.126 00:21:54.126 ' 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:54.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.126 --rc genhtml_branch_coverage=1 00:21:54.126 --rc genhtml_function_coverage=1 00:21:54.126 --rc genhtml_legend=1 00:21:54.126 --rc geninfo_all_blocks=1 00:21:54.126 --rc geninfo_unexecuted_blocks=1 00:21:54.126 00:21:54.126 ' 00:21:54.126 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:54.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.126 --rc genhtml_branch_coverage=1 00:21:54.126 --rc genhtml_function_coverage=1 00:21:54.126 --rc genhtml_legend=1 00:21:54.126 --rc geninfo_all_blocks=1 00:21:54.126 --rc geninfo_unexecuted_blocks=1 00:21:54.126 00:21:54.126 ' 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:54.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:21:54.127 05:16:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:59.397 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:59.397 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:59.397 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:59.398 Found net devices under 0000:86:00.0: cvl_0_0 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:59.398 Found net devices under 0000:86:00.1: cvl_0_1 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:59.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:59.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:21:59.398 00:21:59.398 --- 10.0.0.2 ping statistics --- 00:21:59.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.398 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:59.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:59.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:21:59.398 00:21:59.398 --- 10.0.0.1 ping statistics --- 00:21:59.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.398 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3674730 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3674730 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3674730 ']' 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:59.398 05:16:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:59.398 [2024-12-09 05:16:35.976492] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:21:59.398 [2024-12-09 05:16:35.976538] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.657 [2024-12-09 05:16:36.045753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:59.657 [2024-12-09 05:16:36.087572] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
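At this point nvmf_tcp_init has split the two ports into a point-to-point pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace with the target address 10.0.0.2/24, cvl_0_1 keeps the initiator address 10.0.0.1/24 in the root namespace, an iptables ACCEPT rule for TCP port 4420 is added on cvl_0_1, and the two one-packet pings confirm reachability in both directions before nvmf_tgt is run inside the namespace via ip netns exec. A short sketch for inspecting that topology by hand, assuming the interface and namespace names from this run (not part of the test scripts):

    # target interface and address live inside the namespace
    ip netns exec cvl_0_0_ns_spdk ip -4 addr show dev cvl_0_0
    # initiator interface stays in the root namespace
    ip -4 addr show dev cvl_0_1
    # reachability in both directions, mirroring the checks above
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
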
00:21:59.657 [2024-12-09 05:16:36.087607] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.657 [2024-12-09 05:16:36.087614] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.657 [2024-12-09 05:16:36.087621] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.657 [2024-12-09 05:16:36.087626] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:59.657 [2024-12-09 05:16:36.089072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:59.657 [2024-12-09 05:16:36.089159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:59.657 [2024-12-09 05:16:36.089161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.657 05:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:59.657 05:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:21:59.657 05:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:59.657 05:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:59.657 05:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:59.657 05:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.657 05:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:59.915 [2024-12-09 05:16:36.398437] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.915 05:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:00.172 Malloc0 00:22:00.172 05:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:00.430 05:16:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:00.430 05:16:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:00.688 [2024-12-09 05:16:37.221140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.688 05:16:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:00.946 [2024-12-09 05:16:37.417617] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:00.946 05:16:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:01.205 [2024-12-09 05:16:37.618260] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:22:01.205 05:16:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3674992 00:22:01.205 05:16:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:01.205 05:16:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:01.205 05:16:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3674992 /var/tmp/bdevperf.sock 00:22:01.205 05:16:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3674992 ']' 00:22:01.205 05:16:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:01.205 05:16:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:01.205 05:16:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:01.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:01.205 05:16:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:01.205 05:16:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:01.463 05:16:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:01.463 05:16:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:01.463 05:16:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:01.722 NVMe0n1 00:22:01.722 05:16:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:01.980 00:22:01.980 05:16:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:01.980 05:16:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3675218 00:22:01.980 05:16:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:03.355 05:16:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:03.355 [2024-12-09 05:16:39.799756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeb2d0 is same with the state(6) to be set 00:22:03.355 [2024-12-09 05:16:39.799805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeb2d0 is same with the state(6) to be set 00:22:03.355 [2024-12-09 05:16:39.799814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeb2d0 is same with the state(6) to be set 00:22:03.355 [2024-12-09 
05:16:39.799820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeb2d0 is same with the state(6) to be set 00:22:03.356 [2024-12-09 05:16:39.799827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeb2d0 is same with the state(6) to be set 00:22:03.356 [2024-12-09 05:16:39.799834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeb2d0 is same with the state(6) to be set 00:22:03.356 05:16:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:06.639 05:16:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:06.639 00:22:06.898 05:16:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:06.898 [2024-12-09 05:16:43.481748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdebfa0 is same with the state(6) to be set 00:22:06.898 [2024-12-09 05:16:43.481791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdebfa0 is same with the state(6) to be set 00:22:06.898 [2024-12-09 05:16:43.481799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdebfa0 is same with the state(6) to be set 00:22:06.898 [2024-12-09 05:16:43.481811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdebfa0 is same with the state(6) to be set 00:22:06.898 [2024-12-09 05:16:43.481817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdebfa0 is same with the state(6) to be set 00:22:06.898 [2024-12-09 05:16:43.481824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdebfa0 is same with the state(6) to be set 00:22:06.898 [2024-12-09 05:16:43.481829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdebfa0 is same with the state(6) to be set 00:22:06.898 [2024-12-09 05:16:43.481836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdebfa0 is same with the state(6) to be set 00:22:06.898 [2024-12-09 05:16:43.481842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdebfa0 is same with the state(6) to be set 00:22:06.898 [2024-12-09 05:16:43.481847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdebfa0 is same with the state(6) to be set 00:22:06.898 [2024-12-09 05:16:43.481853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdebfa0 is same with the state(6) to be set 00:22:06.898 [2024-12-09 05:16:43.481859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdebfa0 is same with the state(6) to be set 00:22:06.898 [2024-12-09 05:16:43.481865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdebfa0 is same with the state(6) to be set 00:22:06.898 [2024-12-09 05:16:43.481871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdebfa0 is same with the state(6) to be set 00:22:06.898 [2024-12-09 05:16:43.481877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdebfa0 is same with the state(6) to be set 00:22:06.898 
[... the same tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* message for tqpair=0xdebfa0 repeats many more times and is elided here ...] 00:22:06.899 [2024-12-09 05:16:43.482289] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdebfa0 is same with the state(6) to be set 00:22:06.899 [2024-12-09 05:16:43.482294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdebfa0 is same with the state(6) to be set 00:22:06.899 [2024-12-09 05:16:43.482300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdebfa0 is same with the state(6) to be set 00:22:06.899 [2024-12-09 05:16:43.482306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdebfa0 is same with the state(6) to be set 00:22:06.899 [2024-12-09 05:16:43.482313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdebfa0 is same with the state(6) to be set 00:22:06.899 [2024-12-09 05:16:43.482320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdebfa0 is same with the state(6) to be set 00:22:06.899 05:16:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:10.188 05:16:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:10.188 [2024-12-09 05:16:46.698936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.188 05:16:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:11.123 05:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:11.381 [2024-12-09 05:16:47.915185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdecce0 is same with the state(6) to be set 00:22:11.381 [2024-12-09 05:16:47.915222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdecce0 is same with the state(6) to be set 00:22:11.382 [2024-12-09 05:16:47.915230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdecce0 is same with the state(6) to be set 00:22:11.382 [2024-12-09 05:16:47.915236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdecce0 is same with the state(6) to be set 00:22:11.382 [2024-12-09 05:16:47.915242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdecce0 is same with the state(6) to be set 00:22:11.382 [2024-12-09 05:16:47.915248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdecce0 is same with the state(6) to be set 00:22:11.382 [2024-12-09 05:16:47.915254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdecce0 is same with the state(6) to be set 00:22:11.382 [2024-12-09 05:16:47.915260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdecce0 is same with the state(6) to be set 00:22:11.382 [2024-12-09 05:16:47.915266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdecce0 is same with the state(6) to be set 00:22:11.382 [2024-12-09 05:16:47.915272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdecce0 is same with the state(6) to be set 00:22:11.382 [2024-12-09 05:16:47.915277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdecce0 is same with the state(6) to be set 00:22:11.382 [2024-12-09 05:16:47.915283] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdecce0 is same with the state(6) to be set 00:22:11.382 [... the same tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* message for tqpair=0xdecce0 repeats many more times and is elided here ...] 00:22:11.382 [2024-12-09 05:16:47.915550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdecce0 is 
same with the state(6) to be set 00:22:11.382 [2024-12-09 05:16:47.915556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdecce0 is same with the state(6) to be set 00:22:11.382 [2024-12-09 05:16:47.915562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdecce0 is same with the state(6) to be set 00:22:11.382 [2024-12-09 05:16:47.915568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdecce0 is same with the state(6) to be set 00:22:11.382 [2024-12-09 05:16:47.915574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdecce0 is same with the state(6) to be set 00:22:11.382 [2024-12-09 05:16:47.915582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdecce0 is same with the state(6) to be set 00:22:11.382 [2024-12-09 05:16:47.915588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdecce0 is same with the state(6) to be set 00:22:11.382 [2024-12-09 05:16:47.915594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdecce0 is same with the state(6) to be set 00:22:11.382 05:16:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3675218 00:22:17.957 { 00:22:17.957 "results": [ 00:22:17.957 { 00:22:17.957 "job": "NVMe0n1", 00:22:17.957 "core_mask": "0x1", 00:22:17.957 "workload": "verify", 00:22:17.957 "status": "finished", 00:22:17.957 "verify_range": { 00:22:17.957 "start": 0, 00:22:17.957 "length": 16384 00:22:17.957 }, 00:22:17.957 "queue_depth": 128, 00:22:17.957 "io_size": 4096, 00:22:17.957 "runtime": 15.045188, 00:22:17.957 "iops": 10512.597117430503, 00:22:17.957 "mibps": 41.0648324899629, 00:22:17.957 "io_failed": 10245, 00:22:17.957 "io_timeout": 0, 00:22:17.957 "avg_latency_us": 11382.328519362927, 00:22:17.957 "min_latency_us": 427.4086956521739, 00:22:17.957 "max_latency_us": 40575.33217391304 00:22:17.957 } 00:22:17.957 ], 00:22:17.957 "core_count": 1 00:22:17.957 } 00:22:17.957 05:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3674992 00:22:17.957 05:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3674992 ']' 00:22:17.957 05:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3674992 00:22:17.957 05:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:17.957 05:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:17.957 05:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3674992 00:22:17.957 05:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:17.957 05:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:17.957 05:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3674992' 00:22:17.957 killing process with pid 3674992 00:22:17.957 05:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3674992 00:22:17.957 05:16:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3674992 00:22:17.957 05:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:17.957 [2024-12-09 05:16:37.694710] Starting SPDK v25.01-pre git sha1 
421ce3854 / DPDK 24.03.0 initialization... 00:22:17.957 [2024-12-09 05:16:37.694765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3674992 ] 00:22:17.957 [2024-12-09 05:16:37.761144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.957 [2024-12-09 05:16:37.802937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.957 Running I/O for 15 seconds... 00:22:17.957 10584.00 IOPS, 41.34 MiB/s [2024-12-09T04:16:54.603Z] [2024-12-09 05:16:39.800298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.957 [2024-12-09 05:16:39.800331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.957 [2024-12-09 05:16:39.800346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.957 [2024-12-09 05:16:39.800354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.957 [2024-12-09 05:16:39.800363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.957 [2024-12-09 05:16:39.800370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.957 [2024-12-09 05:16:39.800379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:92952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.957 [2024-12-09 05:16:39.800386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.957 [2024-12-09 05:16:39.800394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.957 [2024-12-09 05:16:39.800400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.957 [2024-12-09 05:16:39.800408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:92968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.957 [2024-12-09 05:16:39.800415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.957 [2024-12-09 05:16:39.800423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.957 [2024-12-09 05:16:39.800430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.957 [2024-12-09 05:16:39.800438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:93872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.957 [2024-12-09 05:16:39.800445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.957 [2024-12-09 05:16:39.800453] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:93880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.957 [2024-12-09 05:16:39.800460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.957 [2024-12-09 05:16:39.800467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:93888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.957 [2024-12-09 05:16:39.800474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.957 [2024-12-09 05:16:39.800482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:93896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.957 [2024-12-09 05:16:39.800489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.957 [2024-12-09 05:16:39.800503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:93904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.957 [2024-12-09 05:16:39.800510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.957 [2024-12-09 05:16:39.800519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:93912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.957 [2024-12-09 05:16:39.800526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.957 [2024-12-09 05:16:39.800534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:93920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.957 [2024-12-09 05:16:39.800541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.957 [2024-12-09 05:16:39.800549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:93928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.957 [2024-12-09 05:16:39.800556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.957 [2024-12-09 05:16:39.800563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:93936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.957 [2024-12-09 05:16:39.800570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.957 [2024-12-09 05:16:39.800578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.957 [2024-12-09 05:16:39.800586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.957 [2024-12-09 05:16:39.800594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:92992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.957 [2024-12-09 05:16:39.800601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.957 [2024-12-09 05:16:39.800609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:93000 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.957 [2024-12-09 05:16:39.800616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated NOTICE pairs from nvme_qpair.c (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion): queued READ commands sqid:1 nsid:1 lba:93008-93856 len:8 (and one WRITE lba:93944 len:8), each completed ABORTED - SQ DELETION (00/08) qid:1 ...]
00:22:17.960 [2024-12-09 05:16:39.802283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:17.960 [2024-12-09 05:16:39.802290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:17.960 [2024-12-09 05:16:39.802297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93864 len:8 PRP1 0x0 PRP2 0x0
00:22:17.960 [2024-12-09 05:16:39.802304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.960 [2024-12-09 05:16:39.802349] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[... ASYNC EVENT REQUEST (0c) commands qid:0 cid:0-3, each completed ABORTED - SQ DELETION (00/08) qid:0 ...]
00:22:17.960 [2024-12-09 05:16:39.802426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:17.960 [2024-12-09 05:16:39.805310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:17.960 [2024-12-09 05:16:39.805337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x809370 (9): Bad file descriptor
00:22:17.960 [2024-12-09 05:16:39.830580] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:22:17.960 10476.50 IOPS, 40.92 MiB/s [2024-12-09T04:16:54.606Z] 10593.67 IOPS, 41.38 MiB/s [2024-12-09T04:16:54.606Z] 10667.25 IOPS, 41.67 MiB/s [2024-12-09T04:16:54.606Z]
[2024-12-09 05:16:43.483846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.960 [2024-12-09 05:16:43.483880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated NOTICE pairs from nvme_qpair.c (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion): queued READ commands sqid:1 nsid:1 lba:24056-24240 len:8 and WRITE commands lba:24248-24680 len:8, each completed ABORTED - SQ DELETION (00/08) qid:1 ...]
00:22:17.963 [2024-12-09 05:16:43.485092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24688 len:8 SGL DATA BLOCK
OFFSET 0x0 len:0x1000 00:22:17.963 [2024-12-09 05:16:43.485099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.963 [2024-12-09 05:16:43.485107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.963 [2024-12-09 05:16:43.485114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.963 [2024-12-09 05:16:43.485122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.963 [2024-12-09 05:16:43.485129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.963 [2024-12-09 05:16:43.485136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.963 [2024-12-09 05:16:43.485143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.963 [2024-12-09 05:16:43.485151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.963 [2024-12-09 05:16:43.485158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.963 [2024-12-09 05:16:43.485166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.963 [2024-12-09 05:16:43.485172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.963 [2024-12-09 05:16:43.485180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.963 [2024-12-09 05:16:43.485186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.963 [2024-12-09 05:16:43.485194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.963 [2024-12-09 05:16:43.485201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.963 [2024-12-09 05:16:43.485209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.963 [2024-12-09 05:16:43.485216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.963 [2024-12-09 05:16:43.485223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.963 [2024-12-09 05:16:43.485230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.963 [2024-12-09 05:16:43.485240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.963 [2024-12-09 
05:16:43.485246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.963 [2024-12-09 05:16:43.485254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.963 [2024-12-09 05:16:43.485260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.963 [2024-12-09 05:16:43.485268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.963 [2024-12-09 05:16:43.485275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.963 [2024-12-09 05:16:43.485283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.963 [2024-12-09 05:16:43.485289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.963 [2024-12-09 05:16:43.485297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.963 [2024-12-09 05:16:43.485305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.963 [2024-12-09 05:16:43.485324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.963 [2024-12-09 05:16:43.485331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24808 len:8 PRP1 0x0 PRP2 0x0 00:22:17.963 [2024-12-09 05:16:43.485337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.963 [2024-12-09 05:16:43.485347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.963 [2024-12-09 05:16:43.485352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.963 [2024-12-09 05:16:43.485358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24816 len:8 PRP1 0x0 PRP2 0x0 00:22:17.963 [2024-12-09 05:16:43.485364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.963 [2024-12-09 05:16:43.485372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.963 [2024-12-09 05:16:43.485377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.963 [2024-12-09 05:16:43.485382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24824 len:8 PRP1 0x0 PRP2 0x0 00:22:17.963 [2024-12-09 05:16:43.485389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.963 [2024-12-09 05:16:43.485395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.963 [2024-12-09 05:16:43.485400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.963 [2024-12-09 05:16:43.485406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:24832 len:8 PRP1 0x0 PRP2 0x0 00:22:17.963 [2024-12-09 05:16:43.485412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.963 [2024-12-09 05:16:43.485418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.963 [2024-12-09 05:16:43.485423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.963 [2024-12-09 05:16:43.485429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24840 len:8 PRP1 0x0 PRP2 0x0 00:22:17.963 [2024-12-09 05:16:43.485435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.963 [2024-12-09 05:16:43.485444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.963 [2024-12-09 05:16:43.485449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.963 [2024-12-09 05:16:43.485454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24848 len:8 PRP1 0x0 PRP2 0x0 00:22:17.963 [2024-12-09 05:16:43.485460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.963 [2024-12-09 05:16:43.485467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.963 [2024-12-09 05:16:43.485472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.963 [2024-12-09 05:16:43.485478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24856 len:8 PRP1 0x0 PRP2 0x0 00:22:17.963 [2024-12-09 05:16:43.485484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.963 [2024-12-09 05:16:43.485490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.963 [2024-12-09 05:16:43.485496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.963 [2024-12-09 05:16:43.485501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:8 PRP1 0x0 PRP2 0x0 00:22:17.963 [2024-12-09 05:16:43.485507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.963 [2024-12-09 05:16:43.485515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.963 [2024-12-09 05:16:43.485520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.963 [2024-12-09 05:16:43.485525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24872 len:8 PRP1 0x0 PRP2 0x0 00:22:17.963 [2024-12-09 05:16:43.485531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.963 [2024-12-09 05:16:43.485538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.963 [2024-12-09 05:16:43.485543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.963 [2024-12-09 05:16:43.485548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24880 len:8 PRP1 0x0 PRP2 0x0 00:22:17.963 
[2024-12-09 05:16:43.485555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.963 [2024-12-09 05:16:43.485561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.963 [2024-12-09 05:16:43.485566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.963 [2024-12-09 05:16:43.485572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24888 len:8 PRP1 0x0 PRP2 0x0 00:22:17.963 [2024-12-09 05:16:43.485578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.963 [2024-12-09 05:16:43.485585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.963 [2024-12-09 05:16:43.485589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.963 [2024-12-09 05:16:43.485595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:8 PRP1 0x0 PRP2 0x0 00:22:17.963 [2024-12-09 05:16:43.485601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.963 [2024-12-09 05:16:43.485608] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.963 [2024-12-09 05:16:43.485612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.963 [2024-12-09 05:16:43.485617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24904 len:8 PRP1 0x0 PRP2 0x0 00:22:17.963 [2024-12-09 05:16:43.485626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.964 [2024-12-09 05:16:43.485632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.964 [2024-12-09 05:16:43.485637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.964 [2024-12-09 05:16:43.485642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24912 len:8 PRP1 0x0 PRP2 0x0 00:22:17.964 [2024-12-09 05:16:43.485649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.964 [2024-12-09 05:16:43.485655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.964 [2024-12-09 05:16:43.485660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.964 [2024-12-09 05:16:43.485666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24920 len:8 PRP1 0x0 PRP2 0x0 00:22:17.964 [2024-12-09 05:16:43.485672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.964 [2024-12-09 05:16:43.485679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.964 [2024-12-09 05:16:43.485683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.964 [2024-12-09 05:16:43.485689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:8 PRP1 0x0 PRP2 0x0 00:22:17.964 [2024-12-09 05:16:43.485695] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.964 [2024-12-09 05:16:43.485703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.964 [2024-12-09 05:16:43.485708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.964 [2024-12-09 05:16:43.485713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24936 len:8 PRP1 0x0 PRP2 0x0 00:22:17.964 [2024-12-09 05:16:43.485719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.964 [2024-12-09 05:16:43.485726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.964 [2024-12-09 05:16:43.485731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.964 [2024-12-09 05:16:43.485736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24944 len:8 PRP1 0x0 PRP2 0x0 00:22:17.964 [2024-12-09 05:16:43.485743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.964 [2024-12-09 05:16:43.485749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.964 [2024-12-09 05:16:43.485754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.964 [2024-12-09 05:16:43.485759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24952 len:8 PRP1 0x0 PRP2 0x0 00:22:17.964 [2024-12-09 05:16:43.485766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.964 [2024-12-09 05:16:43.485772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.964 [2024-12-09 05:16:43.485777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.964 [2024-12-09 05:16:43.485782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:8 PRP1 0x0 PRP2 0x0 00:22:17.964 [2024-12-09 05:16:43.485789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.964 [2024-12-09 05:16:43.485795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.964 [2024-12-09 05:16:43.485800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.964 [2024-12-09 05:16:43.485807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24968 len:8 PRP1 0x0 PRP2 0x0 00:22:17.964 [2024-12-09 05:16:43.485813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.964 [2024-12-09 05:16:43.485820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.964 [2024-12-09 05:16:43.485825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.964 [2024-12-09 05:16:43.485830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24976 len:8 PRP1 0x0 PRP2 0x0 00:22:17.964 [2024-12-09 05:16:43.485836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.964 [2024-12-09 05:16:43.485843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.964 [2024-12-09 05:16:43.485847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.964 [2024-12-09 05:16:43.497552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24984 len:8 PRP1 0x0 PRP2 0x0 00:22:17.964 [2024-12-09 05:16:43.497562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.964 [2024-12-09 05:16:43.497569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.964 [2024-12-09 05:16:43.497575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.964 [2024-12-09 05:16:43.497581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:8 PRP1 0x0 PRP2 0x0 00:22:17.964 [2024-12-09 05:16:43.497587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.964 [2024-12-09 05:16:43.497595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.964 [2024-12-09 05:16:43.497600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.964 [2024-12-09 05:16:43.497606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25000 len:8 PRP1 0x0 PRP2 0x0 00:22:17.964 [2024-12-09 05:16:43.497612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.964 [2024-12-09 05:16:43.497619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.964 [2024-12-09 05:16:43.497624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.964 [2024-12-09 05:16:43.497629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25008 len:8 PRP1 0x0 PRP2 0x0 00:22:17.964 [2024-12-09 05:16:43.497636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.964 [2024-12-09 05:16:43.497642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.964 [2024-12-09 05:16:43.497647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.964 [2024-12-09 05:16:43.497653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25016 len:8 PRP1 0x0 PRP2 0x0 00:22:17.964 [2024-12-09 05:16:43.497659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.964 [2024-12-09 05:16:43.497666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.964 [2024-12-09 05:16:43.497670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.964 [2024-12-09 05:16:43.497676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:8 PRP1 0x0 PRP2 0x0 00:22:17.964 [2024-12-09 05:16:43.497682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:17.964 [2024-12-09 05:16:43.497691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.964 [2024-12-09 05:16:43.497696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.964 [2024-12-09 05:16:43.497702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25032 len:8 PRP1 0x0 PRP2 0x0 00:22:17.964 [2024-12-09 05:16:43.497708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.964 [2024-12-09 05:16:43.497714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.964 [2024-12-09 05:16:43.497719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.964 [2024-12-09 05:16:43.497725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25040 len:8 PRP1 0x0 PRP2 0x0 00:22:17.964 [2024-12-09 05:16:43.497731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.964 [2024-12-09 05:16:43.497737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.964 [2024-12-09 05:16:43.497742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.964 [2024-12-09 05:16:43.497748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25048 len:8 PRP1 0x0 PRP2 0x0 00:22:17.964 [2024-12-09 05:16:43.497754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.964 [2024-12-09 05:16:43.497760] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.964 [2024-12-09 05:16:43.497765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.964 [2024-12-09 05:16:43.497770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:8 PRP1 0x0 PRP2 0x0 00:22:17.964 [2024-12-09 05:16:43.497777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.964 [2024-12-09 05:16:43.497784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.964 [2024-12-09 05:16:43.497789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.964 [2024-12-09 05:16:43.497794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25064 len:8 PRP1 0x0 PRP2 0x0 00:22:17.964 [2024-12-09 05:16:43.497801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.964 [2024-12-09 05:16:43.497846] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:17.964 [2024-12-09 05:16:43.497869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.964 [2024-12-09 05:16:43.497877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.964 [2024-12-09 05:16:43.497884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
[2024-12-09 05:16:43.497926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
[2024-12-09 05:16:43.497950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x809370 (9): Bad file descriptor
[2024-12-09 05:16:43.501220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
[2024-12-09 05:16:43.531466] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
10553.80 IOPS, 41.23 MiB/s [2024-12-09T04:16:54.611Z] 10586.00 IOPS, 41.35 MiB/s [2024-12-09T04:16:54.611Z] 10607.14 IOPS, 41.43 MiB/s [2024-12-09T04:16:54.611Z] 10655.25 IOPS, 41.62 MiB/s [2024-12-09T04:16:54.611Z] 10676.78 IOPS, 41.71 MiB/s [2024-12-09T04:16:54.611Z]
[2024-12-09 05:16:47.916925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-12-09 05:16:47.916959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) / "ABORTED - SQ DELETION (00/08)" pair repeats for lba:21528 through lba:21704, and again for lba:21712 and lba:21720 ...]
[... the same WRITE (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) / "ABORTED - SQ DELETION (00/08)" pair repeats for lba:21856 through lba:22352, timestamps 05:16:47.917331 - 05:16:47.918255 ...]
[2024-12-09 05:16:47.918305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-12-09 05:16:47.918312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22360 len:8 PRP1 0x0 PRP2 0x0
[2024-12-09 05:16:47.918319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-09 05:16:47.918356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-09 05:16:47.918365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / "ABORTED - SQ DELETION (00/08)" pair repeats for qid:0 cid:1, cid:2 and cid:3 ...]
[2024-12-09 05:16:47.918412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x809370 is same with the state(6) to be set
[2024-12-09 05:16:47.918566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-12-09 05:16:47.918572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-12-09 05:16:47.918578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:8 PRP1 0x0 PRP2 0x0
[2024-12-09 05:16:47.918585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same "aborting queued i/o" / "Command completed manually:" / WRITE (PRP1 0x0 PRP2 0x0) / "ABORTED - SQ DELETION (00/08)" sequence repeats for lba:22376 through lba:22408 ...]
[2024-12-09 05:16:47.918715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-12-09 05:16:47.918721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-12-09 05:16:47.918726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22416 len:8 PRP1 0x0 PRP2 0x0
[2024-12-09 05:16:47.918733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.967 [2024-12-09 05:16:47.918739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.967 [2024-12-09 05:16:47.918744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.967 [2024-12-09 05:16:47.918749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22424 len:8 PRP1 0x0 PRP2 0x0 00:22:17.967 [2024-12-09 05:16:47.918756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.967 [2024-12-09 05:16:47.918762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.967 [2024-12-09 05:16:47.918767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.967 [2024-12-09 05:16:47.918773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:8 PRP1 0x0 PRP2 0x0 00:22:17.967 [2024-12-09 05:16:47.918779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.967 [2024-12-09 05:16:47.918786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.967 [2024-12-09 05:16:47.918791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.967 [2024-12-09 05:16:47.918796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22440 len:8 PRP1 0x0 PRP2 0x0 00:22:17.967 [2024-12-09 05:16:47.918802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.967 [2024-12-09 05:16:47.918810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.967 [2024-12-09 05:16:47.918820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.967 [2024-12-09 05:16:47.918825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22448 len:8 PRP1 0x0 PRP2 0x0 00:22:17.967 [2024-12-09 05:16:47.918832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.967 [2024-12-09 05:16:47.918838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.968 [2024-12-09 05:16:47.918843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.968 [2024-12-09 05:16:47.918849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22456 len:8 PRP1 0x0 PRP2 0x0 00:22:17.968 [2024-12-09 05:16:47.918855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.968 [2024-12-09 05:16:47.918862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.968 [2024-12-09 05:16:47.918867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.968 [2024-12-09 05:16:47.918872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:8 PRP1 0x0 PRP2 0x0 00:22:17.968 [2024-12-09 05:16:47.918879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:17.968 [2024-12-09 05:16:47.918885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.968 [2024-12-09 05:16:47.918890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.968 [2024-12-09 05:16:47.918896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22472 len:8 PRP1 0x0 PRP2 0x0 00:22:17.968 [2024-12-09 05:16:47.918902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.968 [2024-12-09 05:16:47.918908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.968 [2024-12-09 05:16:47.918913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.968 [2024-12-09 05:16:47.918919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22480 len:8 PRP1 0x0 PRP2 0x0 00:22:17.968 [2024-12-09 05:16:47.918925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.968 [2024-12-09 05:16:47.918932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.968 [2024-12-09 05:16:47.918937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.968 [2024-12-09 05:16:47.918942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22488 len:8 PRP1 0x0 PRP2 0x0 00:22:17.968 [2024-12-09 05:16:47.918948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.968 [2024-12-09 05:16:47.918955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.968 [2024-12-09 05:16:47.918960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.968 [2024-12-09 05:16:47.918965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:8 PRP1 0x0 PRP2 0x0 00:22:17.968 [2024-12-09 05:16:47.918973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.968 [2024-12-09 05:16:47.918980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.968 [2024-12-09 05:16:47.918985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.968 [2024-12-09 05:16:47.918991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22504 len:8 PRP1 0x0 PRP2 0x0 00:22:17.968 [2024-12-09 05:16:47.919003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.968 [2024-12-09 05:16:47.919013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.968 [2024-12-09 05:16:47.919018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.968 [2024-12-09 05:16:47.919024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22512 len:8 PRP1 0x0 PRP2 0x0 00:22:17.968 [2024-12-09 05:16:47.919030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.968 [2024-12-09 05:16:47.919037] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.968 [2024-12-09 05:16:47.919042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.968 [2024-12-09 05:16:47.919047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22520 len:8 PRP1 0x0 PRP2 0x0 00:22:17.968 [2024-12-09 05:16:47.919053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.968 [2024-12-09 05:16:47.919061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.968 [2024-12-09 05:16:47.919066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.968 [2024-12-09 05:16:47.919071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:8 PRP1 0x0 PRP2 0x0 00:22:17.968 [2024-12-09 05:16:47.919077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.968 [2024-12-09 05:16:47.919084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.968 [2024-12-09 05:16:47.919089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.968 [2024-12-09 05:16:47.919095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22536 len:8 PRP1 0x0 PRP2 0x0 00:22:17.968 [2024-12-09 05:16:47.919101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.968 [2024-12-09 05:16:47.919108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.968 [2024-12-09 05:16:47.919112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.968 [2024-12-09 05:16:47.929518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21728 len:8 PRP1 0x0 PRP2 0x0 00:22:17.968 [2024-12-09 05:16:47.929532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.968 [2024-12-09 05:16:47.929544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.968 [2024-12-09 05:16:47.929550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.968 [2024-12-09 05:16:47.929558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21736 len:8 PRP1 0x0 PRP2 0x0 00:22:17.968 [2024-12-09 05:16:47.929566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.968 [2024-12-09 05:16:47.929576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.968 [2024-12-09 05:16:47.929583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.968 [2024-12-09 05:16:47.929590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21744 len:8 PRP1 0x0 PRP2 0x0 00:22:17.968 [2024-12-09 05:16:47.929600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.968 [2024-12-09 05:16:47.929609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:22:17.968 [2024-12-09 05:16:47.929615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.968 [2024-12-09 05:16:47.929623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21752 len:8 PRP1 0x0 PRP2 0x0 00:22:17.968 [2024-12-09 05:16:47.929635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.968 [2024-12-09 05:16:47.929645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.968 [2024-12-09 05:16:47.929652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.968 [2024-12-09 05:16:47.929659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21760 len:8 PRP1 0x0 PRP2 0x0 00:22:17.968 [2024-12-09 05:16:47.929668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.968 [2024-12-09 05:16:47.929677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.968 [2024-12-09 05:16:47.929684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.968 [2024-12-09 05:16:47.929691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21768 len:8 PRP1 0x0 PRP2 0x0 00:22:17.968 [2024-12-09 05:16:47.929700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.968 [2024-12-09 05:16:47.929709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.968 [2024-12-09 05:16:47.929716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.968 [2024-12-09 05:16:47.929723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21776 len:8 PRP1 0x0 PRP2 0x0 00:22:17.968 [2024-12-09 05:16:47.929732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.968 [2024-12-09 05:16:47.929741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.968 [2024-12-09 05:16:47.929748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.968 [2024-12-09 05:16:47.929755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21784 len:8 PRP1 0x0 PRP2 0x0 00:22:17.968 [2024-12-09 05:16:47.929764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.968 [2024-12-09 05:16:47.929774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.968 [2024-12-09 05:16:47.929781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.968 [2024-12-09 05:16:47.929788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21792 len:8 PRP1 0x0 PRP2 0x0 00:22:17.968 [2024-12-09 05:16:47.929797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.968 [2024-12-09 05:16:47.929806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.968 [2024-12-09 05:16:47.929813] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.968 [2024-12-09 05:16:47.929820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21800 len:8 PRP1 0x0 PRP2 0x0 00:22:17.968 [2024-12-09 05:16:47.929828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.968 [2024-12-09 05:16:47.929837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.968 [2024-12-09 05:16:47.929844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.968 [2024-12-09 05:16:47.929851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21808 len:8 PRP1 0x0 PRP2 0x0 00:22:17.968 [2024-12-09 05:16:47.929860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.968 [2024-12-09 05:16:47.929869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.968 [2024-12-09 05:16:47.929876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.968 [2024-12-09 05:16:47.929885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21816 len:8 PRP1 0x0 PRP2 0x0 00:22:17.968 [2024-12-09 05:16:47.929894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.968 [2024-12-09 05:16:47.929903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.968 [2024-12-09 05:16:47.929910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.969 [2024-12-09 05:16:47.929917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21824 len:8 PRP1 0x0 PRP2 0x0 00:22:17.969 [2024-12-09 05:16:47.929926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.969 [2024-12-09 05:16:47.929935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.969 [2024-12-09 05:16:47.929942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.969 [2024-12-09 05:16:47.929949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21832 len:8 PRP1 0x0 PRP2 0x0 00:22:17.969 [2024-12-09 05:16:47.929958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.969 [2024-12-09 05:16:47.929967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.969 [2024-12-09 05:16:47.929973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.969 [2024-12-09 05:16:47.929981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21840 len:8 PRP1 0x0 PRP2 0x0 00:22:17.969 [2024-12-09 05:16:47.929989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.969 [2024-12-09 05:16:47.930003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.969 [2024-12-09 05:16:47.930010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:22:17.969 [2024-12-09 05:16:47.930017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21848 len:8 PRP1 0x0 PRP2 0x0 00:22:17.969 [2024-12-09 05:16:47.930026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.969 [2024-12-09 05:16:47.930035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.969 [2024-12-09 05:16:47.930041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.969 [2024-12-09 05:16:47.930049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21520 len:8 PRP1 0x0 PRP2 0x0 00:22:17.969 [2024-12-09 05:16:47.930057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.969 [2024-12-09 05:16:47.930066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.969 [2024-12-09 05:16:47.930073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.969 [2024-12-09 05:16:47.930080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21528 len:8 PRP1 0x0 PRP2 0x0 00:22:17.969 [2024-12-09 05:16:47.930089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.969 [2024-12-09 05:16:47.930098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.969 [2024-12-09 05:16:47.930105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.969 [2024-12-09 05:16:47.930112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21536 len:8 PRP1 0x0 PRP2 0x0 00:22:17.969 [2024-12-09 05:16:47.930121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.969 [2024-12-09 05:16:47.930132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.969 [2024-12-09 05:16:47.930139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.969 [2024-12-09 05:16:47.930146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21544 len:8 PRP1 0x0 PRP2 0x0 00:22:17.969 [2024-12-09 05:16:47.930155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.969 [2024-12-09 05:16:47.930164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.969 [2024-12-09 05:16:47.930171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.969 [2024-12-09 05:16:47.930178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21552 len:8 PRP1 0x0 PRP2 0x0 00:22:17.969 [2024-12-09 05:16:47.930187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.969 [2024-12-09 05:16:47.930196] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.969 [2024-12-09 05:16:47.930203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.969 [2024-12-09 
05:16:47.930210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21560 len:8 PRP1 0x0 PRP2 0x0 00:22:17.969 [2024-12-09 05:16:47.930219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.969 [2024-12-09 05:16:47.930227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.969 [2024-12-09 05:16:47.930234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.969 [2024-12-09 05:16:47.930241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21568 len:8 PRP1 0x0 PRP2 0x0 00:22:17.969 [2024-12-09 05:16:47.930250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.969 [2024-12-09 05:16:47.930259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.969 [2024-12-09 05:16:47.930266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.969 [2024-12-09 05:16:47.930273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21576 len:8 PRP1 0x0 PRP2 0x0 00:22:17.969 [2024-12-09 05:16:47.930282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.969 [2024-12-09 05:16:47.930291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.969 [2024-12-09 05:16:47.930297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.969 [2024-12-09 05:16:47.930305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21584 len:8 PRP1 0x0 PRP2 0x0 00:22:17.969 [2024-12-09 05:16:47.930313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.969 [2024-12-09 05:16:47.930322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.969 [2024-12-09 05:16:47.930329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.969 [2024-12-09 05:16:47.930336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21592 len:8 PRP1 0x0 PRP2 0x0 00:22:17.969 [2024-12-09 05:16:47.930345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.969 [2024-12-09 05:16:47.930354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.969 [2024-12-09 05:16:47.930361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.969 [2024-12-09 05:16:47.930368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21600 len:8 PRP1 0x0 PRP2 0x0 00:22:17.969 [2024-12-09 05:16:47.930378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.969 [2024-12-09 05:16:47.930388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.969 [2024-12-09 05:16:47.930394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.969 [2024-12-09 05:16:47.930401] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21608 len:8 PRP1 0x0 PRP2 0x0 00:22:17.969 [2024-12-09 05:16:47.930410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.969 [2024-12-09 05:16:47.930419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.969 [2024-12-09 05:16:47.930427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.969 [2024-12-09 05:16:47.930434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21616 len:8 PRP1 0x0 PRP2 0x0 00:22:17.969 [2024-12-09 05:16:47.930443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.969 [2024-12-09 05:16:47.930452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.969 [2024-12-09 05:16:47.930459] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.969 [2024-12-09 05:16:47.930466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21624 len:8 PRP1 0x0 PRP2 0x0 00:22:17.969 [2024-12-09 05:16:47.930475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.969 [2024-12-09 05:16:47.930484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.969 [2024-12-09 05:16:47.930491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.969 [2024-12-09 05:16:47.930498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21632 len:8 PRP1 0x0 PRP2 0x0 00:22:17.969 [2024-12-09 05:16:47.930506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.969 [2024-12-09 05:16:47.930515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.969 [2024-12-09 05:16:47.930522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.969 [2024-12-09 05:16:47.930530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21640 len:8 PRP1 0x0 PRP2 0x0 00:22:17.969 [2024-12-09 05:16:47.930538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.969 [2024-12-09 05:16:47.930547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.969 [2024-12-09 05:16:47.930554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.969 [2024-12-09 05:16:47.930562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21648 len:8 PRP1 0x0 PRP2 0x0 00:22:17.969 [2024-12-09 05:16:47.930570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.969 [2024-12-09 05:16:47.930579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.969 [2024-12-09 05:16:47.930586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.969 [2024-12-09 05:16:47.930593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:21656 len:8 PRP1 0x0 PRP2 0x0 00:22:17.969 [2024-12-09 05:16:47.930602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.969 [2024-12-09 05:16:47.930611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.969 [2024-12-09 05:16:47.930618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.969 [2024-12-09 05:16:47.930627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21664 len:8 PRP1 0x0 PRP2 0x0 00:22:17.969 [2024-12-09 05:16:47.930636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.969 [2024-12-09 05:16:47.930644] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.969 [2024-12-09 05:16:47.930651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.969 [2024-12-09 05:16:47.930659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21672 len:8 PRP1 0x0 PRP2 0x0 00:22:17.969 [2024-12-09 05:16:47.930667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.970 [2024-12-09 05:16:47.930676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.970 [2024-12-09 05:16:47.930683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.970 [2024-12-09 05:16:47.930690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21680 len:8 PRP1 0x0 PRP2 0x0 00:22:17.970 [2024-12-09 05:16:47.930699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.970 [2024-12-09 05:16:47.930708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.970 [2024-12-09 05:16:47.930714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.970 [2024-12-09 05:16:47.930721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21688 len:8 PRP1 0x0 PRP2 0x0 00:22:17.970 [2024-12-09 05:16:47.930730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.970 [2024-12-09 05:16:47.930739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.970 [2024-12-09 05:16:47.930746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.970 [2024-12-09 05:16:47.930753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21696 len:8 PRP1 0x0 PRP2 0x0 00:22:17.970 [2024-12-09 05:16:47.930761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.970 [2024-12-09 05:16:47.930770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.970 [2024-12-09 05:16:47.930777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.970 [2024-12-09 05:16:47.930784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21704 len:8 PRP1 0x0 PRP2 0x0 00:22:17.970 
[2024-12-09 05:16:47.930793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.970 [2024-12-09 05:16:47.930802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.970 [2024-12-09 05:16:47.930808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.970 [2024-12-09 05:16:47.930816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:8 PRP1 0x0 PRP2 0x0 00:22:17.970 [2024-12-09 05:16:47.930825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.970 [2024-12-09 05:16:47.930833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.970 [2024-12-09 05:16:47.930840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.970 [2024-12-09 05:16:47.930847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21864 len:8 PRP1 0x0 PRP2 0x0 00:22:17.970 [2024-12-09 05:16:47.930856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.970 [2024-12-09 05:16:47.930865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.970 [2024-12-09 05:16:47.930875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.970 [2024-12-09 05:16:47.930882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21872 len:8 PRP1 0x0 PRP2 0x0 00:22:17.970 [2024-12-09 05:16:47.930891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.970 [2024-12-09 05:16:47.930900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.970 [2024-12-09 05:16:47.930906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.970 [2024-12-09 05:16:47.930914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21880 len:8 PRP1 0x0 PRP2 0x0 00:22:17.970 [2024-12-09 05:16:47.930922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.970 [2024-12-09 05:16:47.930932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.970 [2024-12-09 05:16:47.930938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.970 [2024-12-09 05:16:47.930946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:8 PRP1 0x0 PRP2 0x0 00:22:17.970 [2024-12-09 05:16:47.930954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.970 [2024-12-09 05:16:47.930963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.970 [2024-12-09 05:16:47.930970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.970 [2024-12-09 05:16:47.930977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21896 len:8 PRP1 0x0 PRP2 0x0 00:22:17.970 [2024-12-09 05:16:47.930985] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.970 [2024-12-09 05:16:47.930995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.970 [2024-12-09 05:16:47.931005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.970 [2024-12-09 05:16:47.931012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21904 len:8 PRP1 0x0 PRP2 0x0 00:22:17.970 [2024-12-09 05:16:47.931021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.970 [2024-12-09 05:16:47.931030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.970 [2024-12-09 05:16:47.931037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.970 [2024-12-09 05:16:47.931044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21912 len:8 PRP1 0x0 PRP2 0x0 00:22:17.970 [2024-12-09 05:16:47.931053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.970 [2024-12-09 05:16:47.931062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.970 [2024-12-09 05:16:47.931069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.970 [2024-12-09 05:16:47.931076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:8 PRP1 0x0 PRP2 0x0 00:22:17.970 [2024-12-09 05:16:47.931084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.970 [2024-12-09 05:16:47.931093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.970 [2024-12-09 05:16:47.931100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.970 [2024-12-09 05:16:47.931107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21928 len:8 PRP1 0x0 PRP2 0x0 00:22:17.970 [2024-12-09 05:16:47.931116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.970 [2024-12-09 05:16:47.931127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.970 [2024-12-09 05:16:47.931134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.970 [2024-12-09 05:16:47.931142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21936 len:8 PRP1 0x0 PRP2 0x0 00:22:17.970 [2024-12-09 05:16:47.931150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.970 [2024-12-09 05:16:47.931159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.970 [2024-12-09 05:16:47.931166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.970 [2024-12-09 05:16:47.931174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21944 len:8 PRP1 0x0 PRP2 0x0 00:22:17.970 [2024-12-09 05:16:47.931182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.970 [2024-12-09 05:16:47.931191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.970 [2024-12-09 05:16:47.931198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.970 [2024-12-09 05:16:47.931206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:8 PRP1 0x0 PRP2 0x0 00:22:17.970 [2024-12-09 05:16:47.931214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.970 [2024-12-09 05:16:47.931223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.970 [2024-12-09 05:16:47.931230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.970 [2024-12-09 05:16:47.931237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21960 len:8 PRP1 0x0 PRP2 0x0 00:22:17.970 [2024-12-09 05:16:47.931246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.970 [2024-12-09 05:16:47.931254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.970 [2024-12-09 05:16:47.931261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.970 [2024-12-09 05:16:47.931269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21968 len:8 PRP1 0x0 PRP2 0x0 00:22:17.970 [2024-12-09 05:16:47.931277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.970 [2024-12-09 05:16:47.931286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.970 [2024-12-09 05:16:47.931293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.970 [2024-12-09 05:16:47.931300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21976 len:8 PRP1 0x0 PRP2 0x0 00:22:17.970 [2024-12-09 05:16:47.931308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.970 [2024-12-09 05:16:47.931317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.970 [2024-12-09 05:16:47.931324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.970 [2024-12-09 05:16:47.931332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:8 PRP1 0x0 PRP2 0x0 00:22:17.970 [2024-12-09 05:16:47.931340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.970 [2024-12-09 05:16:47.931349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.971 [2024-12-09 05:16:47.931355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.971 [2024-12-09 05:16:47.931363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21992 len:8 PRP1 0x0 PRP2 0x0 00:22:17.971 [2024-12-09 05:16:47.931373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:17.971 [2024-12-09 05:16:47.931382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.971 [2024-12-09 05:16:47.931388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.971 [2024-12-09 05:16:47.931396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22000 len:8 PRP1 0x0 PRP2 0x0 00:22:17.971 [2024-12-09 05:16:47.931405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.971 [2024-12-09 05:16:47.931414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.971 [2024-12-09 05:16:47.931420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.971 [2024-12-09 05:16:47.931428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22008 len:8 PRP1 0x0 PRP2 0x0 00:22:17.971 [2024-12-09 05:16:47.931436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.971 [2024-12-09 05:16:47.931445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.971 [2024-12-09 05:16:47.931454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.971 [2024-12-09 05:16:47.931462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:8 PRP1 0x0 PRP2 0x0 00:22:17.971 [2024-12-09 05:16:47.931471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.971 [2024-12-09 05:16:47.931480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.971 [2024-12-09 05:16:47.931487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.971 [2024-12-09 05:16:47.931495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22024 len:8 PRP1 0x0 PRP2 0x0 00:22:17.971 [2024-12-09 05:16:47.931504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.971 [2024-12-09 05:16:47.931513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.971 [2024-12-09 05:16:47.931520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.971 [2024-12-09 05:16:47.931527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22032 len:8 PRP1 0x0 PRP2 0x0 00:22:17.971 [2024-12-09 05:16:47.931536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.971 [2024-12-09 05:16:47.931545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.971 [2024-12-09 05:16:47.931552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.971 [2024-12-09 05:16:47.931559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22040 len:8 PRP1 0x0 PRP2 0x0 00:22:17.971 [2024-12-09 05:16:47.931568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.971 [2024-12-09 05:16:47.931577] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.971 [2024-12-09 05:16:47.931584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.971 [2024-12-09 05:16:47.931591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:8 PRP1 0x0 PRP2 0x0 00:22:17.971 [2024-12-09 05:16:47.931600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.971 [2024-12-09 05:16:47.931609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.971 [2024-12-09 05:16:47.931615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.971 [2024-12-09 05:16:47.931625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22056 len:8 PRP1 0x0 PRP2 0x0 00:22:17.971 [2024-12-09 05:16:47.931633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.971 [2024-12-09 05:16:47.931643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.971 [2024-12-09 05:16:47.931650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.971 [2024-12-09 05:16:47.931657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22064 len:8 PRP1 0x0 PRP2 0x0 00:22:17.971 [2024-12-09 05:16:47.931666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.971 [2024-12-09 05:16:47.931675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.971 [2024-12-09 05:16:47.931682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.971 [2024-12-09 05:16:47.931690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22072 len:8 PRP1 0x0 PRP2 0x0 00:22:17.971 [2024-12-09 05:16:47.931698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.971 [2024-12-09 05:16:47.931707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.971 [2024-12-09 05:16:47.931714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.971 [2024-12-09 05:16:47.931721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:8 PRP1 0x0 PRP2 0x0 00:22:17.971 [2024-12-09 05:16:47.931729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.971 [2024-12-09 05:16:47.931738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.971 [2024-12-09 05:16:47.938283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.971 [2024-12-09 05:16:47.938297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22088 len:8 PRP1 0x0 PRP2 0x0 00:22:17.971 [2024-12-09 05:16:47.938308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.971 [2024-12-09 05:16:47.938318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:22:17.971 [2024-12-09 05:16:47.938326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.971 [2024-12-09 05:16:47.938334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22096 len:8 PRP1 0x0 PRP2 0x0 00:22:17.971 [2024-12-09 05:16:47.938343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.971 [2024-12-09 05:16:47.938352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.971 [2024-12-09 05:16:47.938359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.971 [2024-12-09 05:16:47.938367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22104 len:8 PRP1 0x0 PRP2 0x0 00:22:17.971 [2024-12-09 05:16:47.938376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.971 [2024-12-09 05:16:47.938385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.971 [2024-12-09 05:16:47.938392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.971 [2024-12-09 05:16:47.938399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:8 PRP1 0x0 PRP2 0x0 00:22:17.971 [2024-12-09 05:16:47.938408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.971 [2024-12-09 05:16:47.938418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.971 [2024-12-09 05:16:47.938427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.971 [2024-12-09 05:16:47.938435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22120 len:8 PRP1 0x0 PRP2 0x0 00:22:17.971 [2024-12-09 05:16:47.938444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.971 [2024-12-09 05:16:47.938453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.971 [2024-12-09 05:16:47.938460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.971 [2024-12-09 05:16:47.938468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22128 len:8 PRP1 0x0 PRP2 0x0 00:22:17.971 [2024-12-09 05:16:47.938477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.971 [2024-12-09 05:16:47.938487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.971 [2024-12-09 05:16:47.938493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.971 [2024-12-09 05:16:47.938501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22136 len:8 PRP1 0x0 PRP2 0x0 00:22:17.971 [2024-12-09 05:16:47.938510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.971 [2024-12-09 05:16:47.938519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.971 [2024-12-09 
05:16:47.938526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.971 [2024-12-09 05:16:47.938534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:8 PRP1 0x0 PRP2 0x0 00:22:17.971 [2024-12-09 05:16:47.938543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.971 [2024-12-09 05:16:47.938552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.971 [2024-12-09 05:16:47.938559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.971 [2024-12-09 05:16:47.938567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22152 len:8 PRP1 0x0 PRP2 0x0 00:22:17.971 [2024-12-09 05:16:47.938576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.971 [2024-12-09 05:16:47.938585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.971 [2024-12-09 05:16:47.938592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.971 [2024-12-09 05:16:47.938600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22160 len:8 PRP1 0x0 PRP2 0x0 00:22:17.971 [2024-12-09 05:16:47.938608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.971 [2024-12-09 05:16:47.938618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.971 [2024-12-09 05:16:47.938625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.971 [2024-12-09 05:16:47.938632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22168 len:8 PRP1 0x0 PRP2 0x0 00:22:17.971 [2024-12-09 05:16:47.938642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.971 [2024-12-09 05:16:47.938651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.972 [2024-12-09 05:16:47.938658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.972 [2024-12-09 05:16:47.938666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:8 PRP1 0x0 PRP2 0x0 00:22:17.972 [2024-12-09 05:16:47.938674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.972 [2024-12-09 05:16:47.938686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.972 [2024-12-09 05:16:47.938693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.972 [2024-12-09 05:16:47.938701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22184 len:8 PRP1 0x0 PRP2 0x0 00:22:17.972 [2024-12-09 05:16:47.938710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.972 [2024-12-09 05:16:47.938719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.972 [2024-12-09 05:16:47.938727] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.972 [2024-12-09 05:16:47.938734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22192 len:8 PRP1 0x0 PRP2 0x0 00:22:17.972 [2024-12-09 05:16:47.938743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.972 [2024-12-09 05:16:47.938752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.972 [2024-12-09 05:16:47.938759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.972 [2024-12-09 05:16:47.938767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22200 len:8 PRP1 0x0 PRP2 0x0 00:22:17.972 [2024-12-09 05:16:47.938776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.972 [2024-12-09 05:16:47.938785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.972 [2024-12-09 05:16:47.938792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.972 [2024-12-09 05:16:47.938799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:8 PRP1 0x0 PRP2 0x0 00:22:17.972 [2024-12-09 05:16:47.938808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.972 [2024-12-09 05:16:47.938818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.972 [2024-12-09 05:16:47.938824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.972 [2024-12-09 05:16:47.938832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22216 len:8 PRP1 0x0 PRP2 0x0 00:22:17.972 [2024-12-09 05:16:47.938841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.972 [2024-12-09 05:16:47.938850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.972 [2024-12-09 05:16:47.938857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.972 [2024-12-09 05:16:47.938865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22224 len:8 PRP1 0x0 PRP2 0x0 00:22:17.972 [2024-12-09 05:16:47.938873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.972 [2024-12-09 05:16:47.938883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.972 [2024-12-09 05:16:47.938889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.972 [2024-12-09 05:16:47.938897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22232 len:8 PRP1 0x0 PRP2 0x0 00:22:17.972 [2024-12-09 05:16:47.938906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.972 [2024-12-09 05:16:47.938916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.972 [2024-12-09 05:16:47.938922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:22:17.972 [2024-12-09 05:16:47.938930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:8 PRP1 0x0 PRP2 0x0 00:22:17.972 [2024-12-09 05:16:47.938941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.972 [2024-12-09 05:16:47.938950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.972 [2024-12-09 05:16:47.938958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.972 [2024-12-09 05:16:47.938965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22248 len:8 PRP1 0x0 PRP2 0x0 00:22:17.972 [2024-12-09 05:16:47.938974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.972 [2024-12-09 05:16:47.938983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.972 [2024-12-09 05:16:47.938990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.972 [2024-12-09 05:16:47.939003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22256 len:8 PRP1 0x0 PRP2 0x0 00:22:17.972 [2024-12-09 05:16:47.939012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.972 [2024-12-09 05:16:47.939022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.972 [2024-12-09 05:16:47.939028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.972 [2024-12-09 05:16:47.939036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22264 len:8 PRP1 0x0 PRP2 0x0 00:22:17.972 [2024-12-09 05:16:47.939045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.972 [2024-12-09 05:16:47.939054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.972 [2024-12-09 05:16:47.939061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.972 [2024-12-09 05:16:47.939069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:8 PRP1 0x0 PRP2 0x0 00:22:17.972 [2024-12-09 05:16:47.939078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.972 [2024-12-09 05:16:47.939087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.972 [2024-12-09 05:16:47.939095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.972 [2024-12-09 05:16:47.939102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22280 len:8 PRP1 0x0 PRP2 0x0 00:22:17.972 [2024-12-09 05:16:47.939111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.972 [2024-12-09 05:16:47.939120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.972 [2024-12-09 05:16:47.939127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.972 [2024-12-09 
05:16:47.939135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22288 len:8 PRP1 0x0 PRP2 0x0 00:22:17.972 [2024-12-09 05:16:47.939144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.972 [2024-12-09 05:16:47.939153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.972 [2024-12-09 05:16:47.939160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.972 [2024-12-09 05:16:47.939168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22296 len:8 PRP1 0x0 PRP2 0x0 00:22:17.972 [2024-12-09 05:16:47.939177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.972 [2024-12-09 05:16:47.939186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.972 [2024-12-09 05:16:47.939195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.972 [2024-12-09 05:16:47.939203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:8 PRP1 0x0 PRP2 0x0 00:22:17.972 [2024-12-09 05:16:47.939212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.972 [2024-12-09 05:16:47.939221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.972 [2024-12-09 05:16:47.939228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.972 [2024-12-09 05:16:47.939236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22312 len:8 PRP1 0x0 PRP2 0x0 00:22:17.972 [2024-12-09 05:16:47.939244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.972 [2024-12-09 05:16:47.939254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.972 [2024-12-09 05:16:47.939261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.972 [2024-12-09 05:16:47.939269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22320 len:8 PRP1 0x0 PRP2 0x0 00:22:17.972 [2024-12-09 05:16:47.939278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.972 [2024-12-09 05:16:47.939287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.972 [2024-12-09 05:16:47.939294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.972 [2024-12-09 05:16:47.939301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22328 len:8 PRP1 0x0 PRP2 0x0 00:22:17.972 [2024-12-09 05:16:47.939310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.972 [2024-12-09 05:16:47.939320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.972 [2024-12-09 05:16:47.939327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.972 [2024-12-09 05:16:47.939334] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:8 PRP1 0x0 PRP2 0x0 00:22:17.972 [2024-12-09 05:16:47.939343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.972 [2024-12-09 05:16:47.939352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.972 [2024-12-09 05:16:47.939359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.972 [2024-12-09 05:16:47.939367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22344 len:8 PRP1 0x0 PRP2 0x0 00:22:17.972 [2024-12-09 05:16:47.939376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.972 [2024-12-09 05:16:47.939385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.972 [2024-12-09 05:16:47.939392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.972 [2024-12-09 05:16:47.939400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22352 len:8 PRP1 0x0 PRP2 0x0 00:22:17.972 [2024-12-09 05:16:47.939408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.972 [2024-12-09 05:16:47.939418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.972 [2024-12-09 05:16:47.939425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.972 [2024-12-09 05:16:47.939433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21712 len:8 PRP1 0x0 PRP2 0x0 00:22:17.973 [2024-12-09 05:16:47.939441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.973 [2024-12-09 05:16:47.939455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.973 [2024-12-09 05:16:47.939463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.973 [2024-12-09 05:16:47.939470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21720 len:8 PRP1 0x0 PRP2 0x0 00:22:17.973 [2024-12-09 05:16:47.939479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.973 [2024-12-09 05:16:47.939488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.973 [2024-12-09 05:16:47.939495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.973 [2024-12-09 05:16:47.939503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22360 len:8 PRP1 0x0 PRP2 0x0 00:22:17.973 [2024-12-09 05:16:47.939512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.973 [2024-12-09 05:16:47.939564] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:17.973 [2024-12-09 05:16:47.939577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
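Note: the long run of "aborting queued i/o" / "ABORTED - SQ DELETION (00/08)" notices above is the expected flood when the failover path deletes the submission queue while verify I/O is still queued; each pending WRITE/READ is completed manually with an abort status before the controller reconnects on the other listener. A quick, illustrative way to summarize such a burst from the saved output (not part of the test itself; it assumes the bdevperf output was captured to the try.txt file that host/failover.sh uses below):

  # count queued commands that were completed manually due to SQ deletion
  grep -c 'ABORTED - SQ DELETION' try.txt
  # lowest and highest LBA touched by the aborted I/O
  grep -o 'lba:[0-9]*' try.txt | cut -d: -f2 | sort -n | sed -n '1p;$p'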
00:22:17.973 [2024-12-09 05:16:47.939616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x809370 (9): Bad file descriptor 00:22:17.973 [2024-12-09 05:16:47.943686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:17.973 [2024-12-09 05:16:48.102189] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:22:17.973 10479.40 IOPS, 40.94 MiB/s [2024-12-09T04:16:54.619Z] 10503.73 IOPS, 41.03 MiB/s [2024-12-09T04:16:54.619Z] 10531.50 IOPS, 41.14 MiB/s [2024-12-09T04:16:54.619Z] 10534.62 IOPS, 41.15 MiB/s [2024-12-09T04:16:54.619Z] 10537.86 IOPS, 41.16 MiB/s [2024-12-09T04:16:54.619Z] 10544.13 IOPS, 41.19 MiB/s 00:22:17.973 Latency(us) 00:22:17.973 [2024-12-09T04:16:54.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.973 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:17.973 Verification LBA range: start 0x0 length 0x4000 00:22:17.973 NVMe0n1 : 15.05 10512.60 41.06 680.95 0.00 11382.33 427.41 40575.33 00:22:17.973 [2024-12-09T04:16:54.619Z] =================================================================================================================== 00:22:17.973 [2024-12-09T04:16:54.619Z] Total : 10512.60 41.06 680.95 0.00 11382.33 427.41 40575.33 00:22:17.973 Received shutdown signal, test time was about 15.000000 seconds 00:22:17.973 00:22:17.973 Latency(us) 00:22:17.973 [2024-12-09T04:16:54.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.973 [2024-12-09T04:16:54.619Z] =================================================================================================================== 00:22:17.973 [2024-12-09T04:16:54.619Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:17.973 05:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:17.973 05:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:17.973 05:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:17.973 05:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3677754 00:22:17.973 05:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3677754 /var/tmp/bdevperf.sock 00:22:17.973 05:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:17.973 05:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3677754 ']' 00:22:17.973 05:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:17.973 05:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:17.973 05:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:17.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
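The pass criterion for this phase is visible in the trace above: host/failover.sh greps the captured bdevperf output for 'Resetting controller successful', gets count=3 for the three planned path switches, and only fails if the count differs. A minimal standalone sketch of that same check (outside the run_test/xtrace wrappers; try.txt is the capture file the script reads):

  # expect one successful reset per planned failover
  count=$(grep -c 'Resetting controller successful' try.txt)
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, saw $count" >&2
      exit 1
  fi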
00:22:17.973 05:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:17.973 05:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:17.973 05:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:17.973 05:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:17.973 05:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:17.973 [2024-12-09 05:16:54.488236] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:17.973 05:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:18.232 [2024-12-09 05:16:54.680799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:18.232 05:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:18.489 NVMe0n1 00:22:18.489 05:16:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:18.794 00:22:18.794 05:16:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:19.359 00:22:19.359 05:16:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:19.359 05:16:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:19.359 05:16:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:19.616 05:16:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:22.899 05:16:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:22.899 05:16:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:22.899 05:16:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3678658 00:22:22.899 05:16:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:22.899 05:16:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3678658 00:22:24.274 { 00:22:24.274 "results": [ 00:22:24.274 { 00:22:24.274 "job": "NVMe0n1", 00:22:24.274 "core_mask": "0x1", 
00:22:24.274 "workload": "verify", 00:22:24.274 "status": "finished", 00:22:24.274 "verify_range": { 00:22:24.274 "start": 0, 00:22:24.274 "length": 16384 00:22:24.274 }, 00:22:24.274 "queue_depth": 128, 00:22:24.274 "io_size": 4096, 00:22:24.274 "runtime": 1.004294, 00:22:24.274 "iops": 10716.981282373488, 00:22:24.274 "mibps": 41.86320813427144, 00:22:24.274 "io_failed": 0, 00:22:24.274 "io_timeout": 0, 00:22:24.275 "avg_latency_us": 11894.427329700382, 00:22:24.275 "min_latency_us": 580.5634782608696, 00:22:24.275 "max_latency_us": 9972.869565217392 00:22:24.275 } 00:22:24.275 ], 00:22:24.275 "core_count": 1 00:22:24.275 } 00:22:24.275 05:17:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:24.275 [2024-12-09 05:16:54.111568] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:22:24.275 [2024-12-09 05:16:54.111619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3677754 ] 00:22:24.275 [2024-12-09 05:16:54.177315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.275 [2024-12-09 05:16:54.215554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.275 [2024-12-09 05:16:56.172839] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:24.275 [2024-12-09 05:16:56.172885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:24.275 [2024-12-09 05:16:56.172897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.275 [2024-12-09 05:16:56.172907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:24.275 [2024-12-09 05:16:56.172913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.275 [2024-12-09 05:16:56.172921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:24.275 [2024-12-09 05:16:56.172927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.275 [2024-12-09 05:16:56.172934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:24.275 [2024-12-09 05:16:56.172941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.275 [2024-12-09 05:16:56.172948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:22:24.275 [2024-12-09 05:16:56.172972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:22:24.275 [2024-12-09 05:16:56.172986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bb370 (9): Bad file descriptor 00:22:24.275 [2024-12-09 05:16:56.183628] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:22:24.275 Running I/O for 1 seconds... 00:22:24.275 10627.00 IOPS, 41.51 MiB/s 00:22:24.275 Latency(us) 00:22:24.275 [2024-12-09T04:17:00.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.275 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:24.275 Verification LBA range: start 0x0 length 0x4000 00:22:24.275 NVMe0n1 : 1.00 10716.98 41.86 0.00 0.00 11894.43 580.56 9972.87 00:22:24.275 [2024-12-09T04:17:00.921Z] =================================================================================================================== 00:22:24.275 [2024-12-09T04:17:00.921Z] Total : 10716.98 41.86 0.00 0.00 11894.43 580.56 9972.87 00:22:24.275 05:17:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:24.275 05:17:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:24.275 05:17:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:24.533 05:17:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:24.533 05:17:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:24.533 05:17:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:24.791 05:17:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:28.212 05:17:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:28.212 05:17:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:28.212 05:17:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3677754 00:22:28.212 05:17:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3677754 ']' 00:22:28.212 05:17:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3677754 00:22:28.212 05:17:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:28.212 05:17:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:28.212 05:17:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3677754 00:22:28.212 05:17:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:28.212 05:17:04 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:28.212 05:17:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3677754' 00:22:28.212 killing process with pid 3677754 00:22:28.212 05:17:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3677754 00:22:28.212 05:17:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3677754 00:22:28.212 05:17:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:28.212 05:17:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:28.470 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:28.470 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:28.470 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:28.470 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:28.470 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:22:28.470 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:28.470 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:22:28.470 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:28.470 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:28.470 rmmod nvme_tcp 00:22:28.470 rmmod nvme_fabrics 00:22:28.470 rmmod nvme_keyring 00:22:28.470 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:28.470 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:22:28.470 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:22:28.470 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3674730 ']' 00:22:28.470 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3674730 00:22:28.470 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3674730 ']' 00:22:28.470 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3674730 00:22:28.470 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:28.470 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:28.470 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3674730 00:22:28.727 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:28.727 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:28.727 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3674730' 00:22:28.727 killing process with pid 3674730 00:22:28.728 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3674730 00:22:28.728 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3674730 00:22:28.985 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:22:28.985 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:28.985 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:28.985 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:22:28.985 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:22:28.985 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:22:28.985 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:28.985 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:28.985 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:28.985 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.985 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.985 05:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.884 05:17:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:30.884 00:22:30.884 real 0m37.204s 00:22:30.884 user 1m59.443s 00:22:30.884 sys 0m7.543s 00:22:30.884 05:17:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:30.884 05:17:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:30.884 ************************************ 00:22:30.884 END TEST nvmf_failover 00:22:30.884 ************************************ 00:22:30.884 05:17:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:30.884 05:17:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:30.884 05:17:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:30.884 05:17:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.884 ************************************ 00:22:30.884 START TEST nvmf_host_discovery 00:22:30.884 ************************************ 00:22:30.884 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:31.143 * Looking for test storage... 
00:22:31.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:31.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.143 --rc genhtml_branch_coverage=1 00:22:31.143 --rc genhtml_function_coverage=1 00:22:31.143 --rc genhtml_legend=1 00:22:31.143 --rc geninfo_all_blocks=1 00:22:31.143 --rc geninfo_unexecuted_blocks=1 00:22:31.143 00:22:31.143 ' 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:31.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.143 --rc genhtml_branch_coverage=1 00:22:31.143 --rc genhtml_function_coverage=1 00:22:31.143 --rc genhtml_legend=1 00:22:31.143 --rc geninfo_all_blocks=1 00:22:31.143 --rc geninfo_unexecuted_blocks=1 00:22:31.143 00:22:31.143 ' 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:31.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.143 --rc genhtml_branch_coverage=1 00:22:31.143 --rc genhtml_function_coverage=1 00:22:31.143 --rc genhtml_legend=1 00:22:31.143 --rc geninfo_all_blocks=1 00:22:31.143 --rc geninfo_unexecuted_blocks=1 00:22:31.143 00:22:31.143 ' 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:31.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.143 --rc genhtml_branch_coverage=1 00:22:31.143 --rc genhtml_function_coverage=1 00:22:31.143 --rc genhtml_legend=1 00:22:31.143 --rc geninfo_all_blocks=1 00:22:31.143 --rc geninfo_unexecuted_blocks=1 00:22:31.143 00:22:31.143 ' 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:31.143 05:17:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:31.143 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:31.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:22:31.144 05:17:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:36.411 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:36.411 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:36.412 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:36.412 05:17:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:36.412 Found net devices under 0000:86:00.0: cvl_0_0 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:36.412 Found net devices under 0000:86:00.1: cvl_0_1 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:36.412 
05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:36.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:36.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:22:36.412 00:22:36.412 --- 10.0.0.2 ping statistics --- 00:22:36.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.412 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:36.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:36.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:22:36.412 00:22:36.412 --- 10.0.0.1 ping statistics --- 00:22:36.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.412 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3682906 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3682906 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3682906 ']' 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:36.412 [2024-12-09 05:17:12.582666] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:22:36.412 [2024-12-09 05:17:12.582711] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.412 [2024-12-09 05:17:12.651373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.412 [2024-12-09 05:17:12.692461] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:36.412 [2024-12-09 05:17:12.692494] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.412 [2024-12-09 05:17:12.692500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:36.412 [2024-12-09 05:17:12.692507] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:36.412 [2024-12-09 05:17:12.692512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:36.412 [2024-12-09 05:17:12.693065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.412 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.413 [2024-12-09 05:17:12.829590] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.413 [2024-12-09 05:17:12.837762] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.413 null0 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.413 null1 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3682925 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3682925 /tmp/host.sock 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3682925 ']' 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:36.413 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:36.413 05:17:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.413 [2024-12-09 05:17:12.901635] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:22:36.413 [2024-12-09 05:17:12.901675] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3682925 ] 00:22:36.413 [2024-12-09 05:17:12.965710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.413 [2024-12-09 05:17:13.006346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.670 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:36.928 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:36.928 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:36.928 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.928 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:36.928 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.928 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:36.928 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.928 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:36.928 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:36.928 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.928 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:36.928 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:36.928 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.928 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.929 [2024-12-09 05:17:13.407224] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:36.929 05:17:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:36.929 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.186 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.186 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:22:37.186 05:17:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:37.750 [2024-12-09 05:17:14.167115] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:37.750 [2024-12-09 05:17:14.167134] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:37.750 [2024-12-09 05:17:14.167152] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:37.750 
[2024-12-09 05:17:14.253412] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:37.750 [2024-12-09 05:17:14.315072] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:22:37.750 [2024-12-09 05:17:14.315792] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x14f0e30:1 started. 00:22:37.750 [2024-12-09 05:17:14.317178] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:37.750 [2024-12-09 05:17:14.317193] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:37.750 [2024-12-09 05:17:14.324826] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x14f0e30 was disconnected and freed. delete nvme_qpair. 00:22:38.006 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:38.006 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:38.006 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:38.006 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:38.006 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:38.006 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:38.006 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.006 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.006 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:38.006 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.261 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.261 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:38.261 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:38.261 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:38.261 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:38.261 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:38.261 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:38.261 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:38.261 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:38.261 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:38.261 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.261 05:17:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.262 05:17:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:38.518 [2024-12-09 05:17:14.993745] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x14f12f0:1 started. 
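For reference, the discovery wiring exercised up to this point reduces to the RPC sequence below. This is a minimal illustrative sketch, assuming the rpc_cmd wrapper seen in the trace maps to SPDK's scripts/rpc.py, and it reuses the socket paths, address, and ports that appear in this log:

  # target side (default /var/tmp/spdk.sock, running inside cvl_0_0_ns_spdk)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  scripts/rpc.py bdev_null_create null0 1000 512
  scripts/rpc.py bdev_null_create null1 1000 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  # second namespace, added later in the test to trigger another discovery notification
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
  # host side (-r /tmp/host.sock): start discovery, then poll controllers and bdevs
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs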
00:22:38.518 05:17:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.518 05:17:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:38.518 05:17:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:38.518 05:17:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:38.518 05:17:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:38.518 05:17:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:38.518 05:17:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:38.518 05:17:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:38.518 05:17:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:38.518 05:17:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:38.518 05:17:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:38.518 05:17:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:38.518 05:17:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:38.518 05:17:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.518 05:17:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.519 [2024-12-09 05:17:15.037594] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x14f12f0 was disconnected and freed. delete nvme_qpair. 00:22:38.519 05:17:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.519 05:17:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:38.519 05:17:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:38.519 05:17:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:38.519 05:17:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:39.448 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:39.448 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:39.448 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:39.448 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:39.448 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:39.448 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.448 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.448 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.705 [2024-12-09 05:17:16.126655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:39.705 [2024-12-09 05:17:16.127408] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:39.705 [2024-12-09 05:17:16.127429] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:39.705 05:17:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.705 [2024-12-09 05:17:16.213679] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.705 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:22:39.706 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.706 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:39.706 05:17:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:39.706 [2024-12-09 05:17:16.314553] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:22:39.706 [2024-12-09 05:17:16.314588] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:39.706 [2024-12-09 05:17:16.314597] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:39.706 [2024-12-09 05:17:16.314602] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:40.637 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:40.637 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.896 [2024-12-09 05:17:17.382439] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:40.896 [2024-12-09 05:17:17.382460] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:40.896 [2024-12-09 05:17:17.390787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.896 [2024-12-09 05:17:17.390804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.896 [2024-12-09 05:17:17.390812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.896 [2024-12-09 05:17:17.390819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.896 [2024-12-09 05:17:17.390842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 
cdw11:00000000 00:22:40.896 [2024-12-09 05:17:17.390849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.896 [2024-12-09 05:17:17.390856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.896 [2024-12-09 05:17:17.390863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.896 [2024-12-09 05:17:17.390870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c1390 is same with the state(6) to be set 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:40.896 [2024-12-09 05:17:17.400801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c1390 (9): Bad file descriptor 00:22:40.896 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.896 [2024-12-09 05:17:17.410835] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:40.896 [2024-12-09 05:17:17.410848] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:40.896 [2024-12-09 05:17:17.410853] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:40.896 [2024-12-09 05:17:17.410859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:40.896 [2024-12-09 05:17:17.410877] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:40.896 [2024-12-09 05:17:17.411108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:40.896 [2024-12-09 05:17:17.411125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c1390 with addr=10.0.0.2, port=4420 00:22:40.896 [2024-12-09 05:17:17.411134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c1390 is same with the state(6) to be set 00:22:40.897 [2024-12-09 05:17:17.411147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c1390 (9): Bad file descriptor 00:22:40.897 [2024-12-09 05:17:17.411157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:40.897 [2024-12-09 05:17:17.411163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:40.897 [2024-12-09 05:17:17.411175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
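The repeated waitforcondition checks in this trace poll a shell condition with a bounded retry; a rough reconstruction from the local max=10 / (( max-- )) / eval / sleep 1 lines visible above is sketched here (an approximation of the helper, not its exact source):

  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          # the condition string is evaluated verbatim, e.g.
          # '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
          eval "$cond" && return 0
          sleep 1
      done
      return 1
  }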
00:22:40.897 [2024-12-09 05:17:17.411182] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:40.897 [2024-12-09 05:17:17.411188] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:40.897 [2024-12-09 05:17:17.411193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:40.897 [2024-12-09 05:17:17.420909] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:40.897 [2024-12-09 05:17:17.420920] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:40.897 [2024-12-09 05:17:17.420924] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:40.897 [2024-12-09 05:17:17.420928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:40.897 [2024-12-09 05:17:17.420942] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:40.897 [2024-12-09 05:17:17.421049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:40.897 [2024-12-09 05:17:17.421061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c1390 with addr=10.0.0.2, port=4420 00:22:40.897 [2024-12-09 05:17:17.421069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c1390 is same with the state(6) to be set 00:22:40.897 [2024-12-09 05:17:17.421079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c1390 (9): Bad file descriptor 00:22:40.897 [2024-12-09 05:17:17.421089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:40.897 [2024-12-09 05:17:17.421095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:40.897 [2024-12-09 05:17:17.421102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:40.897 [2024-12-09 05:17:17.421108] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:40.897 [2024-12-09 05:17:17.421113] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:40.897 [2024-12-09 05:17:17.421117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:40.897 [2024-12-09 05:17:17.430974] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:40.897 [2024-12-09 05:17:17.430988] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:40.897 [2024-12-09 05:17:17.430992] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:40.897 [2024-12-09 05:17:17.430996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:40.897 [2024-12-09 05:17:17.431015] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
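The waitforcondition calls driving this section (autotest_common.sh@918-924 in the trace) poll an arbitrary shell condition once per second for up to ten attempts. A sketch pieced together from the traced line numbers; the timeout/failure path is not visible in this excerpt and is assumed:

    # Poll "$cond" up to $max times, one second apart (reconstructed from the xtrace).
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1   # assumed failure path, not shown in this log
    }

Here waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' keeps re-reading bdev_nvme_get_controllers until only nvme0 is reported.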
00:22:40.897 [2024-12-09 05:17:17.431280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:40.897 [2024-12-09 05:17:17.431293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c1390 with addr=10.0.0.2, port=4420 00:22:40.897 [2024-12-09 05:17:17.431301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c1390 is same with the state(6) to be set 00:22:40.897 [2024-12-09 05:17:17.431312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c1390 (9): Bad file descriptor 00:22:40.897 [2024-12-09 05:17:17.431322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:40.897 [2024-12-09 05:17:17.431333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:40.897 [2024-12-09 05:17:17.431340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:40.897 [2024-12-09 05:17:17.431346] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:40.897 [2024-12-09 05:17:17.431351] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:40.897 [2024-12-09 05:17:17.431354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:40.897 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.897 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:40.897 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:40.897 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:40.897 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:40.897 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:40.897 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:40.897 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:40.897 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.897 [2024-12-09 05:17:17.441047] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:40.897 [2024-12-09 05:17:17.441058] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:40.897 [2024-12-09 05:17:17.441063] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:40.897 [2024-12-09 05:17:17.441067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:40.897 [2024-12-09 05:17:17.441080] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
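The repeated posix.c "connect() failed, errno = 111" entries are the host-side initiator retrying the 4420 path that the nvmf_subsystem_remove_listener call at host/discovery.sh@127 tore down a moment earlier; on Linux errno 111 is ECONNREFUSED. A quick way to reproduce the same condition from a shell, assuming bash's /dev/tcp support and that nothing listens on 10.0.0.2:4420 any more:

    # Probe the removed listener; bash should report "Connection refused" (errno 111).
    timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' || echo "connect to 4420 refused, as expected"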
00:22:40.897 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.897 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.897 [2024-12-09 05:17:17.441322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:40.897 [2024-12-09 05:17:17.441334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c1390 with addr=10.0.0.2, port=4420 00:22:40.897 [2024-12-09 05:17:17.441342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c1390 is same with the state(6) to be set 00:22:40.897 [2024-12-09 05:17:17.441352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c1390 (9): Bad file descriptor 00:22:40.897 [2024-12-09 05:17:17.441362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:40.897 [2024-12-09 05:17:17.441369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:40.897 [2024-12-09 05:17:17.441376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:40.897 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:40.897 [2024-12-09 05:17:17.441382] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:40.897 [2024-12-09 05:17:17.441388] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:40.897 [2024-12-09 05:17:17.441392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:40.897 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:40.897 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:40.897 [2024-12-09 05:17:17.451111] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:40.897 [2024-12-09 05:17:17.451126] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:40.897 [2024-12-09 05:17:17.451130] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:40.897 [2024-12-09 05:17:17.451134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:40.897 [2024-12-09 05:17:17.451149] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
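The get_bdev_list fragments interleaved above (host/discovery.sh@55 plus the jq/sort/xargs pipeline) reduce to a one-liner. A reconstructed sketch; the output is a single sorted, space-separated string such as "nvme0n1 nvme0n2", which is what host/discovery.sh@130 waits for:

    # List host-side bdev names as one sorted, space-separated string.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }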
00:22:40.897 [2024-12-09 05:17:17.451401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:40.897 [2024-12-09 05:17:17.451414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c1390 with addr=10.0.0.2, port=4420 00:22:40.897 [2024-12-09 05:17:17.451422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c1390 is same with the state(6) to be set 00:22:40.897 [2024-12-09 05:17:17.451433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c1390 (9): Bad file descriptor 00:22:40.897 [2024-12-09 05:17:17.451442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:40.897 [2024-12-09 05:17:17.451448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:40.897 [2024-12-09 05:17:17.451455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:40.897 [2024-12-09 05:17:17.451461] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:40.897 [2024-12-09 05:17:17.451465] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:40.897 [2024-12-09 05:17:17.451469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:40.897 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.897 [2024-12-09 05:17:17.461180] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:40.897 [2024-12-09 05:17:17.461194] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:40.897 [2024-12-09 05:17:17.461198] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:40.897 [2024-12-09 05:17:17.461202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:40.897 [2024-12-09 05:17:17.461216] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:40.897 [2024-12-09 05:17:17.461329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:40.897 [2024-12-09 05:17:17.461342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c1390 with addr=10.0.0.2, port=4420 00:22:40.897 [2024-12-09 05:17:17.461349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c1390 is same with the state(6) to be set 00:22:40.897 [2024-12-09 05:17:17.461360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c1390 (9): Bad file descriptor 00:22:40.897 [2024-12-09 05:17:17.461370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:40.898 [2024-12-09 05:17:17.461376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:40.898 [2024-12-09 05:17:17.461383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:22:40.898 [2024-12-09 05:17:17.461392] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:40.898 [2024-12-09 05:17:17.461396] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:40.898 [2024-12-09 05:17:17.461400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:40.898 [2024-12-09 05:17:17.471247] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:40.898 [2024-12-09 05:17:17.471261] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:40.898 [2024-12-09 05:17:17.471266] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:40.898 [2024-12-09 05:17:17.471269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:40.898 [2024-12-09 05:17:17.471283] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:40.898 [2024-12-09 05:17:17.471538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:40.898 [2024-12-09 05:17:17.471552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c1390 with addr=10.0.0.2, port=4420 00:22:40.898 [2024-12-09 05:17:17.471559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c1390 is same with the state(6) to be set 00:22:40.898 [2024-12-09 05:17:17.471570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c1390 (9): Bad file descriptor 00:22:40.898 [2024-12-09 05:17:17.471579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:40.898 [2024-12-09 05:17:17.471585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:40.898 [2024-12-09 05:17:17.471592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:40.898 [2024-12-09 05:17:17.471597] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:40.898 [2024-12-09 05:17:17.471601] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:40.898 [2024-12-09 05:17:17.471605] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
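All of the connect/reset churn in this stretch traces back to the single target-side RPC issued at host/discovery.sh@127 near the top of this pass: dropping the 4420 listener so that only 4421 stays advertised. For reference, that call in isolation (same arguments as traced above):

    # Target-side RPC that removes the 10.0.0.2:4420 listener from cnode0;
    # existing host connections to 4420 then start failing with ECONNREFUSED.
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420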
00:22:40.898 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:40.898 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:40.898 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:40.898 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:40.898 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:40.898 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:40.898 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:40.898 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:40.898 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:40.898 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.898 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.898 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:40.898 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:40.898 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:40.898 [2024-12-09 05:17:17.481315] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:40.898 [2024-12-09 05:17:17.481326] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:40.898 [2024-12-09 05:17:17.481330] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:40.898 [2024-12-09 05:17:17.481334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:40.898 [2024-12-09 05:17:17.481346] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
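host/discovery.sh@131 then waits for get_subsystem_paths nvme0 to report only $NVMF_SECOND_PORT, which in this run expands to 4421. Reconstructed from the discovery.sh@63 trace above:

    # List the trsvcid (port) of every path behind controller "$1", numerically sorted.
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

The first probe still returns "4420 4421", so the comparison against 4421 fails and the loop sleeps for a second before retrying.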
00:22:40.898 [2024-12-09 05:17:17.481525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:40.898 [2024-12-09 05:17:17.481537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c1390 with addr=10.0.0.2, port=4420 00:22:40.898 [2024-12-09 05:17:17.481544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c1390 is same with the state(6) to be set 00:22:40.898 [2024-12-09 05:17:17.481554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c1390 (9): Bad file descriptor 00:22:40.898 [2024-12-09 05:17:17.481564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:40.898 [2024-12-09 05:17:17.481570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:40.898 [2024-12-09 05:17:17.481577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:40.898 [2024-12-09 05:17:17.481583] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:40.898 [2024-12-09 05:17:17.481588] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:40.898 [2024-12-09 05:17:17.481591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:40.898 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.898 [2024-12-09 05:17:17.491378] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:40.898 [2024-12-09 05:17:17.491392] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:40.898 [2024-12-09 05:17:17.491398] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:40.898 [2024-12-09 05:17:17.491403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:40.898 [2024-12-09 05:17:17.491418] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:40.898 [2024-12-09 05:17:17.491594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:40.898 [2024-12-09 05:17:17.491606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c1390 with addr=10.0.0.2, port=4420 00:22:40.898 [2024-12-09 05:17:17.491613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c1390 is same with the state(6) to be set 00:22:40.898 [2024-12-09 05:17:17.491623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c1390 (9): Bad file descriptor 00:22:40.898 [2024-12-09 05:17:17.491633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:40.898 [2024-12-09 05:17:17.491639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:40.898 [2024-12-09 05:17:17.491646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:22:40.898 [2024-12-09 05:17:17.491651] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:40.898 [2024-12-09 05:17:17.491659] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:40.898 [2024-12-09 05:17:17.491663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:40.898 [2024-12-09 05:17:17.501450] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:40.898 [2024-12-09 05:17:17.501463] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:40.898 [2024-12-09 05:17:17.501467] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:40.898 [2024-12-09 05:17:17.501470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:40.898 [2024-12-09 05:17:17.501483] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:40.898 [2024-12-09 05:17:17.501662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:40.898 [2024-12-09 05:17:17.501678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c1390 with addr=10.0.0.2, port=4420 00:22:40.898 [2024-12-09 05:17:17.501686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c1390 is same with the state(6) to be set 00:22:40.898 [2024-12-09 05:17:17.501697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c1390 (9): Bad file descriptor 00:22:40.898 [2024-12-09 05:17:17.501707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:40.898 [2024-12-09 05:17:17.501713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:40.898 [2024-12-09 05:17:17.501719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:40.898 [2024-12-09 05:17:17.501725] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:40.898 [2024-12-09 05:17:17.501729] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:40.898 [2024-12-09 05:17:17.501733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
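The path list only shrinks once the discovery controller's AER (logged at the start of this pass) and the follow-up discovery log page confirm that the 4420 entry is gone, which is what the "not found / found again" lines just below record. If nvme-cli happens to be installed, the same log page can be inspected by hand; this is a hypothetical aside, the harness itself never shells out to nvme-cli here:

    # Hypothetical manual check of the discovery log page (not part of this script):
    nvme discover -t tcp -a 10.0.0.2 -s 8009
    # only the 10.0.0.2:4421 entry for nqn.2016-06.io.spdk:cnode0 should remain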
00:22:40.898 [2024-12-09 05:17:17.508845] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:40.898 [2024-12-09 05:17:17.508867] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:40.898 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:22:40.898 05:17:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 
-- # (( max-- )) 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.271 05:17:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.205 [2024-12-09 05:17:19.840516] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:43.205 [2024-12-09 05:17:19.840533] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:43.205 [2024-12-09 05:17:19.840544] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:43.464 [2024-12-09 05:17:19.926815] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:43.464 [2024-12-09 05:17:20.025498] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:22:43.464 [2024-12-09 05:17:20.026146] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x14fb4c0:1 started. 00:22:43.464 [2024-12-09 05:17:20.027796] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:43.464 [2024-12-09 05:17:20.027822] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.464 [2024-12-09 05:17:20.029527] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x14fb4c0 was disconnected and freed. delete nvme_qpair. 
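The attach sequence above follows host/discovery.sh@141 restarting discovery with the -w (wait_for_attach) flag, as traced a few lines earlier. The call, reproduced on its own for readability:

    # Restart discovery against 10.0.0.2:8009 and block until discovered subsystems attach.
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w

Re-issuing the same call while the "nvme" discovery service already exists is expected to fail, and the NOT wrapper below asserts exactly that by checking for JSON-RPC error -17 ("File exists").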
00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.464 request: 00:22:43.464 { 00:22:43.464 "name": "nvme", 00:22:43.464 "trtype": "tcp", 00:22:43.464 "traddr": "10.0.0.2", 00:22:43.464 "adrfam": "ipv4", 00:22:43.464 "trsvcid": "8009", 00:22:43.464 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:43.464 "wait_for_attach": true, 00:22:43.464 "method": "bdev_nvme_start_discovery", 00:22:43.464 "req_id": 1 00:22:43.464 } 00:22:43.464 Got JSON-RPC error response 00:22:43.464 response: 00:22:43.464 { 00:22:43.464 "code": -17, 00:22:43.464 "message": "File exists" 00:22:43.464 } 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.464 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.724 request: 00:22:43.724 { 00:22:43.724 "name": "nvme_second", 00:22:43.724 "trtype": "tcp", 00:22:43.724 "traddr": "10.0.0.2", 00:22:43.724 "adrfam": "ipv4", 00:22:43.724 "trsvcid": "8009", 00:22:43.724 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:43.724 "wait_for_attach": true, 00:22:43.724 "method": "bdev_nvme_start_discovery", 00:22:43.724 "req_id": 1 00:22:43.724 } 00:22:43.724 Got JSON-RPC error response 00:22:43.724 response: 00:22:43.724 { 00:22:43.724 "code": -17, 00:22:43.724 "message": "File exists" 00:22:43.724 } 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # 
[[ -n '' ]] 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:43.724 05:17:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:44.664 [2024-12-09 05:17:21.271594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:44.664 [2024-12-09 05:17:21.271623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14db180 with addr=10.0.0.2, port=8010 00:22:44.664 [2024-12-09 05:17:21.271640] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:44.664 [2024-12-09 05:17:21.271648] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:44.664 [2024-12-09 05:17:21.271654] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:46.033 [2024-12-09 05:17:22.274045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:46.033 [2024-12-09 05:17:22.274072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14db180 with addr=10.0.0.2, port=8010 00:22:46.033 [2024-12-09 05:17:22.274086] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:46.033 [2024-12-09 05:17:22.274092] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:46.033 [2024-12-09 05:17:22.274098] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:46.964 [2024-12-09 05:17:23.276185] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:46.964 request: 00:22:46.964 { 00:22:46.964 "name": "nvme_second", 00:22:46.964 "trtype": "tcp", 00:22:46.964 "traddr": "10.0.0.2", 00:22:46.964 "adrfam": "ipv4", 00:22:46.964 "trsvcid": "8010", 00:22:46.964 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:46.964 "wait_for_attach": false, 00:22:46.964 "attach_timeout_ms": 3000, 00:22:46.964 "method": "bdev_nvme_start_discovery", 00:22:46.964 "req_id": 1 00:22:46.964 } 00:22:46.964 Got JSON-RPC error response 00:22:46.964 response: 00:22:46.964 { 00:22:46.964 "code": -110, 00:22:46.964 "message": "Connection timed out" 00:22:46.964 } 00:22:46.964 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:46.964 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:46.964 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:46.964 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:46.964 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:46.964 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:46.964 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:46.964 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:46.964 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.964 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:46.964 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:46.964 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:46.964 05:17:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.964 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:46.964 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:46.964 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3682925 00:22:46.964 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:46.964 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:46.964 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:22:46.964 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:46.964 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:22:46.964 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:46.964 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:46.964 rmmod nvme_tcp 00:22:46.964 rmmod nvme_fabrics 00:22:46.964 rmmod nvme_keyring 00:22:46.964 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:46.964 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:22:46.964 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:22:46.964 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3682906 ']' 00:22:46.965 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3682906 00:22:46.965 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3682906 ']' 00:22:46.965 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3682906 00:22:46.965 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:22:46.965 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:46.965 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3682906 00:22:46.965 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:46.965 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:46.965 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3682906' 00:22:46.965 killing process with pid 3682906 00:22:46.965 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3682906 00:22:46.965 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3682906 00:22:47.222 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:47.222 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:47.222 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:47.222 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:22:47.222 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:22:47.222 05:17:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:47.222 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:22:47.222 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:47.222 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:47.222 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.222 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.222 05:17:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.126 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:49.126 00:22:49.126 real 0m18.196s 00:22:49.126 user 0m24.271s 00:22:49.126 sys 0m5.165s 00:22:49.126 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:49.126 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.126 ************************************ 00:22:49.126 END TEST nvmf_host_discovery 00:22:49.126 ************************************ 00:22:49.126 05:17:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:49.126 05:17:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:49.126 05:17:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:49.126 05:17:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.384 ************************************ 00:22:49.384 START TEST nvmf_host_multipath_status 00:22:49.384 ************************************ 00:22:49.384 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:49.384 * Looking for test storage... 
00:22:49.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:49.384 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:49.384 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:22:49.384 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:49.384 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:49.384 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:49.384 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:49.384 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:49.384 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:22:49.384 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:22:49.384 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:22:49.384 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:22:49.384 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:22:49.384 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:22:49.384 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:22:49.384 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:49.384 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:22:49.384 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:22:49.384 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:49.384 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:49.384 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:49.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.385 --rc genhtml_branch_coverage=1 00:22:49.385 --rc genhtml_function_coverage=1 00:22:49.385 --rc genhtml_legend=1 00:22:49.385 --rc geninfo_all_blocks=1 00:22:49.385 --rc geninfo_unexecuted_blocks=1 00:22:49.385 00:22:49.385 ' 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:49.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.385 --rc genhtml_branch_coverage=1 00:22:49.385 --rc genhtml_function_coverage=1 00:22:49.385 --rc genhtml_legend=1 00:22:49.385 --rc geninfo_all_blocks=1 00:22:49.385 --rc geninfo_unexecuted_blocks=1 00:22:49.385 00:22:49.385 ' 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:49.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.385 --rc genhtml_branch_coverage=1 00:22:49.385 --rc genhtml_function_coverage=1 00:22:49.385 --rc genhtml_legend=1 00:22:49.385 --rc geninfo_all_blocks=1 00:22:49.385 --rc geninfo_unexecuted_blocks=1 00:22:49.385 00:22:49.385 ' 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:49.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.385 --rc genhtml_branch_coverage=1 00:22:49.385 --rc genhtml_function_coverage=1 00:22:49.385 --rc genhtml_legend=1 00:22:49.385 --rc geninfo_all_blocks=1 00:22:49.385 --rc geninfo_unexecuted_blocks=1 00:22:49.385 00:22:49.385 ' 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
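For reference, the lcov version probe traced above (cmp_versions / lt in scripts/common.sh deciding whether the extra --rc lcov_* flags are needed) reduces roughly to the shell below; this is a readability sketch, not the actual helper.

  # Sketch: return success when dotted version $1 is strictly less than $2.
  lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1
  }
  # lcov 1.x still needs the explicit branch/function coverage switches.
  if lt "$(lcov --version | awk '{print $NF}')" 2; then
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi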
00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.385 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:49.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:22:49.386 05:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:22:54.652 05:17:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:54.652 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:54.652 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.652 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:54.653 Found net devices under 0000:86:00.0: cvl_0_0 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: 
cvl_0_1' 00:22:54.653 Found net devices under 0000:86:00.1: cvl_0_1 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:54.653 05:17:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:54.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:54.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:22:54.653 00:22:54.653 --- 10.0.0.2 ping statistics --- 00:22:54.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.653 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:54.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:54.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:22:54.653 00:22:54.653 --- 10.0.0.1 ping statistics --- 00:22:54.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.653 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:54.653 05:17:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:54.653 05:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:54.653 05:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:54.653 05:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:54.653 05:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:54.653 05:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:54.653 05:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:54.653 05:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:54.653 05:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:54.653 05:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:54.653 05:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3688236 00:22:54.653 05:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3688236 00:22:54.653 05:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3688236 ']' 00:22:54.653 05:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.653 05:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:54.653 05:17:31 
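Condensed for readability, the target-namespace bring-up that the trace above performs (nvmf_tcp_init in nvmf/common.sh) is roughly the following; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are taken from the log, and the iptables comment is shortened here.

  # Put the target-side port into its own network namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # The initiator keeps cvl_0_1 in the root namespace.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP traffic in, tagged so nvmftestfini can strip the rule later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
  # Sanity-check connectivity in both directions before starting nvmf_tgt.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1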
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.653 05:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:54.653 05:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:54.653 [2024-12-09 05:17:31.083253] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:22:54.653 [2024-12-09 05:17:31.083299] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.653 [2024-12-09 05:17:31.148144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:54.653 [2024-12-09 05:17:31.191920] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.653 [2024-12-09 05:17:31.191954] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.653 [2024-12-09 05:17:31.191962] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.653 [2024-12-09 05:17:31.191968] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.653 [2024-12-09 05:17:31.191974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:54.653 [2024-12-09 05:17:31.193130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.653 [2024-12-09 05:17:31.193133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.653 05:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:54.653 05:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:22:54.653 05:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:54.653 05:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:54.653 05:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:54.910 05:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.910 05:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3688236 00:22:54.910 05:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:54.910 [2024-12-09 05:17:31.499470] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.910 05:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:55.167 Malloc0 00:22:55.167 05:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:22:55.424 05:17:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:55.682 05:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:55.682 [2024-12-09 05:17:32.281508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.682 05:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:55.940 [2024-12-09 05:17:32.465952] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:55.940 05:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3688487 00:22:55.941 05:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:55.941 05:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:55.941 05:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3688487 /var/tmp/bdevperf.sock 00:22:55.941 05:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3688487 ']' 00:22:55.941 05:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:55.941 05:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.941 05:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:55.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
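Stripped of the xtrace noise, the target and host setup traced above comes down to this rpc.py sequence (reconstructed from the log; $rpc here stands for scripts/rpc.py in the SPDK tree):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Two listeners on the same subsystem give the host two paths to the one namespace.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # bdevperf acts as the host side and exposes its own RPC socket for the status checks.
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &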
00:22:55.941 05:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.941 05:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:56.199 05:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:56.199 05:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:22:56.199 05:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:56.456 05:17:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:56.714 Nvme0n1 00:22:56.714 05:17:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:57.279 Nvme0n1 00:22:57.279 05:17:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:57.279 05:17:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:59.177 05:17:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:59.177 05:17:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:59.435 05:17:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:59.693 05:17:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:00.626 05:17:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:00.626 05:17:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:00.626 05:17:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.626 05:17:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:00.888 05:17:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.888 05:17:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:00.888 05:17:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:00.888 05:17:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.888 05:17:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:00.888 05:17:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:01.146 05:17:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.146 05:17:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:01.146 05:17:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.146 05:17:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:01.146 05:17:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.146 05:17:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:01.404 05:17:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.404 05:17:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:01.405 05:17:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.405 05:17:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:01.663 05:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.663 05:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:01.663 05:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.663 05:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:01.921 05:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.921 05:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:01.921 05:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
00:23:02.179 05:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:02.179 05:17:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:03.553 05:17:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:03.553 05:17:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:03.553 05:17:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.553 05:17:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:03.553 05:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:03.553 05:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:03.553 05:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.553 05:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:03.811 05:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:03.811 05:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:03.811 05:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.811 05:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:03.811 05:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:03.811 05:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:03.811 05:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.811 05:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:04.069 05:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.069 05:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:04.069 05:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:23:04.069 05:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:04.327 05:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.327 05:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:04.327 05:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.327 05:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:04.584 05:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.584 05:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:04.584 05:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:04.842 05:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:04.842 05:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:06.214 05:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:06.214 05:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:06.214 05:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:06.214 05:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:06.214 05:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:06.214 05:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:06.214 05:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:06.214 05:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:06.472 05:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:06.472 05:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:06.472 05:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").connected' 00:23:06.472 05:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:06.472 05:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:06.729 05:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:06.729 05:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:06.729 05:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:06.729 05:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:06.729 05:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:06.729 05:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:06.729 05:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:06.987 05:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:06.987 05:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:06.987 05:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:06.987 05:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:07.244 05:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.244 05:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:07.244 05:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:07.503 05:17:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:07.761 05:17:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:08.693 05:17:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:08.693 05:17:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:08.693 05:17:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.693 05:17:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:08.950 05:17:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.950 05:17:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:08.950 05:17:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:08.950 05:17:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:09.208 05:17:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:09.208 05:17:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:09.208 05:17:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:09.208 05:17:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:09.208 05:17:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:09.208 05:17:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:09.208 05:17:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:09.208 05:17:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:09.466 05:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:09.466 05:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:09.466 05:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:09.466 05:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:09.723 05:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:09.723 05:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:09.723 05:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:09.723 05:17:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:09.981 05:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:09.981 05:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:09.981 05:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:10.239 05:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:10.239 05:17:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:11.613 05:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:11.613 05:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:11.613 05:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.613 05:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:11.613 05:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:11.613 05:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:11.613 05:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:11.613 05:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.613 05:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:11.613 05:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:11.613 05:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.613 05:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:11.871 05:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.871 05:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:11.871 05:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.871 05:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:12.130 05:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:12.130 05:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:12.130 05:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:12.130 05:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:12.388 05:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:12.388 05:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:12.389 05:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:12.389 05:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:12.389 05:17:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:12.389 05:17:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:12.389 05:17:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:12.647 05:17:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:12.905 05:17:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:13.838 05:17:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:13.838 05:17:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:13.838 05:17:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.838 05:17:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:14.096 05:17:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:14.096 05:17:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:14.096 05:17:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.096 05:17:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:14.353 05:17:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.354 05:17:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:14.354 05:17:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.354 05:17:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:14.683 05:17:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.683 05:17:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:14.683 05:17:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.683 05:17:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:14.683 05:17:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.683 05:17:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:14.683 05:17:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.683 05:17:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:15.005 05:17:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:15.005 05:17:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:15.005 05:17:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.005 05:17:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:15.263 05:17:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.263 05:17:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:15.263 05:17:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:23:15.263 05:17:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:15.521 05:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:15.778 05:17:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:16.710 05:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:16.710 05:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:16.710 05:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.710 05:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:16.967 05:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:16.967 05:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:16.967 05:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.967 05:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:17.224 05:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.224 05:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:17.224 05:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.224 05:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:17.481 05:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.481 05:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:17.481 05:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.481 05:17:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:17.481 05:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.481 05:17:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:17.481 05:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.481 05:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:17.739 05:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.739 05:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:17.739 05:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.739 05:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:17.997 05:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.997 05:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:17.997 05:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:18.255 05:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:18.521 05:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:19.456 05:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:19.456 05:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:19.456 05:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.456 05:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:19.713 05:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:19.713 05:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:19.713 05:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.713 05:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:19.713 05:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.713 05:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:19.713 05:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.713 05:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:19.970 05:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.970 05:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:19.970 05:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.970 05:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:20.228 05:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.228 05:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:20.228 05:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.228 05:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:20.484 05:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.484 05:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:20.484 05:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:20.484 05:17:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.742 05:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.742 05:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:20.742 05:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:20.742 05:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:20.999 05:17:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
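The stretch of trace above repeats one pattern per ANA transition: set_ANA_state flips the listener ANA state on ports 4420 and 4421 through nvmf_subsystem_listener_set_ana_state, the script sleeps for a second so the host can pick up the change, and check_status then calls port_status six times, which reads bdev_nvme_get_io_paths over the bdevperf RPC socket and compares one jq-selected field (current, connected, accessible) against the expected value. Below is a minimal bash sketch of those helpers as inferred from the trace, not the verbatim functions from test/nvmf/host/multipath_status.sh; the rpc.py path, the bdevperf socket, the subsystem NQN and the target address are taken from the log above.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bdevperf_sock=/var/tmp/bdevperf.sock
  nqn=nqn.2016-06.io.spdk:cnode1
  target_ip=10.0.0.2

  # port_status <trsvcid> <field> <expected>: inspect the initiator's I/O paths and
  # compare the requested field of the path listening on <trsvcid> with <expected>.
  port_status() {
      local port=$1 field=$2 expected=$3
      local value
      value=$("$rpc" -s "$bdevperf_sock" bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
      [[ "$value" == "$expected" ]]
  }

  # set_ANA_state <state for 4420> <state for 4421>: change the ANA state of both
  # target-side listeners (optimized, non_optimized or inaccessible).
  set_ANA_state() {
      "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a "$target_ip" -s 4420 -n "$1"
      "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a "$target_ip" -s 4421 -n "$2"
  }

  # example mirroring the step above: make both paths non-optimized, then expect
  # both of them to remain accessible.
  set_ANA_state non_optimized non_optimized
  sleep 1
  port_status 4420 accessible true && port_status 4421 accessible true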
00:23:22.373 05:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:22.373 05:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:22.373 05:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.373 05:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:22.373 05:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.373 05:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:22.373 05:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.373 05:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:22.373 05:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.373 05:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:22.373 05:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.373 05:17:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:22.630 05:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.630 05:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:22.630 05:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.630 05:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:22.887 05:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.887 05:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:22.887 05:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:22.887 05:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.145 05:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.145 05:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:23.145 05:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.145 05:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:23.404 05:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.404 05:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:23.404 05:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:23.404 05:18:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:23.662 05:18:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:24.593 05:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:24.593 05:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:24.593 05:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.593 05:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:24.851 05:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.851 05:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:24.851 05:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.851 05:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:25.108 05:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:25.108 05:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:25.108 05:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.108 05:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:25.365 05:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:23:25.365 05:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:25.365 05:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.365 05:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:25.629 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.629 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:25.629 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.629 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:25.629 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.629 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:25.629 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.629 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:25.891 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:25.891 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3688487 00:23:25.891 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3688487 ']' 00:23:25.891 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3688487 00:23:25.891 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:25.891 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:25.891 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3688487 00:23:25.891 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:25.891 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:25.891 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3688487' 00:23:25.891 killing process with pid 3688487 00:23:25.891 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3688487 00:23:25.891 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3688487 00:23:25.891 { 00:23:25.891 "results": [ 00:23:25.891 { 00:23:25.891 "job": "Nvme0n1", 
00:23:25.891 "core_mask": "0x4", 00:23:25.891 "workload": "verify", 00:23:25.891 "status": "terminated", 00:23:25.891 "verify_range": { 00:23:25.891 "start": 0, 00:23:25.891 "length": 16384 00:23:25.891 }, 00:23:25.891 "queue_depth": 128, 00:23:25.891 "io_size": 4096, 00:23:25.891 "runtime": 28.656252, 00:23:25.891 "iops": 10095.283919195015, 00:23:25.891 "mibps": 39.43470280935553, 00:23:25.891 "io_failed": 0, 00:23:25.891 "io_timeout": 0, 00:23:25.891 "avg_latency_us": 12657.884318071388, 00:23:25.891 "min_latency_us": 1075.6452173913044, 00:23:25.891 "max_latency_us": 3078254.4139130437 00:23:25.891 } 00:23:25.891 ], 00:23:25.891 "core_count": 1 00:23:25.891 } 00:23:26.172 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3688487 00:23:26.172 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:26.172 [2024-12-09 05:17:32.527110] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:23:26.172 [2024-12-09 05:17:32.527167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3688487 ] 00:23:26.172 [2024-12-09 05:17:32.588132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.172 [2024-12-09 05:17:32.628596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:26.172 Running I/O for 90 seconds... 00:23:26.172 10891.00 IOPS, 42.54 MiB/s [2024-12-09T04:18:02.819Z] 10912.00 IOPS, 42.62 MiB/s [2024-12-09T04:18:02.819Z] 10925.00 IOPS, 42.68 MiB/s [2024-12-09T04:18:02.819Z] 10940.25 IOPS, 42.74 MiB/s [2024-12-09T04:18:02.819Z] 10945.60 IOPS, 42.76 MiB/s [2024-12-09T04:18:02.819Z] 10960.83 IOPS, 42.82 MiB/s [2024-12-09T04:18:02.819Z] 10943.43 IOPS, 42.75 MiB/s [2024-12-09T04:18:02.819Z] 10930.12 IOPS, 42.70 MiB/s [2024-12-09T04:18:02.819Z] 10942.56 IOPS, 42.74 MiB/s [2024-12-09T04:18:02.819Z] 10927.20 IOPS, 42.68 MiB/s [2024-12-09T04:18:02.819Z] 10921.64 IOPS, 42.66 MiB/s [2024-12-09T04:18:02.819Z] 10948.75 IOPS, 42.77 MiB/s [2024-12-09T04:18:02.819Z] [2024-12-09 05:17:46.624981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:26.173 [2024-12-09 05:17:46.625101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 
lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625747] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.625904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.625912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.626142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.173 [2024-12-09 05:17:46.626154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:23:26.173 [2024-12-09 05:17:46.626170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:73784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.173 [2024-12-09 05:17:46.626177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.626192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:73792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.173 [2024-12-09 05:17:46.626199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:26.173 [2024-12-09 05:17:46.626213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.626220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:73808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.626240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.626260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.626280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.626299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.626319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.626339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:73856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.626358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.626379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:73872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.626399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.626423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:73888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.626448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.626468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.626487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.174 [2024-12-09 05:17:46.626514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.174 [2024-12-09 05:17:46.626539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.174 [2024-12-09 05:17:46.626560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.174 [2024-12-09 05:17:46.626580] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.174 [2024-12-09 05:17:46.626600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.174 [2024-12-09 05:17:46.626619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.174 [2024-12-09 05:17:46.626639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.174 [2024-12-09 05:17:46.626852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.626874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.626898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.626919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:73936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.626939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:73944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.626959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:73952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:26.174 [2024-12-09 05:17:46.626983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.626996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.627011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.627024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.627031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.627044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.627050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.627063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.627071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.627083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:73992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.627090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.627103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.627109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.627122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:74008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.627129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.627143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.627152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.627165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:74024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.627172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.627185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 
lba:74032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.174 [2024-12-09 05:17:46.627192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:26.174 [2024-12-09 05:17:46.627205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.627212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.627225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.627233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.627245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.627252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.627265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.627273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.627290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.627301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.627318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.627329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.627349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.627360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.627746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.627761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.627776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.627784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.627797] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.627805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.627820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.627828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.627840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.627848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.627861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.627868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.627880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.627888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.627900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.627908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.627920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.627926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.627939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.627948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.627960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.627968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.627980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.627987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004e p:0 m:0 dnr:0 
00:23:26.175 [2024-12-09 05:17:46.628006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.628014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.628026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.628033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.628046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.628054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.628068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.628075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.628088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.628094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.628264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.175 [2024-12-09 05:17:46.628275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.628289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.175 [2024-12-09 05:17:46.628297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.628309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.175 [2024-12-09 05:17:46.628317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.628330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:74064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.175 [2024-12-09 05:17:46.628338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.628350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.175 [2024-12-09 05:17:46.628357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.628370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:74080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.175 [2024-12-09 05:17:46.628378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.628391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:74088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.175 [2024-12-09 05:17:46.628398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.628411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.175 [2024-12-09 05:17:46.628418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.628430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.628438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.628451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.628458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.628470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.628483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.628497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.628504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.628517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.628524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.628537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.175 [2024-12-09 05:17:46.628545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.175 [2024-12-09 05:17:46.628558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.176 [2024-12-09 05:17:46.628565] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.628578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.176 [2024-12-09 05:17:46.628585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.628599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.176 [2024-12-09 05:17:46.628607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.628620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.176 [2024-12-09 05:17:46.628627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.628640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.176 [2024-12-09 05:17:46.628647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.628660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.176 [2024-12-09 05:17:46.628667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.628681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.176 [2024-12-09 05:17:46.628688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.628701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.176 [2024-12-09 05:17:46.628709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.628722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.176 [2024-12-09 05:17:46.628730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.628743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.176 [2024-12-09 05:17:46.628750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.628762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:26.176 [2024-12-09 05:17:46.628769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.628782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.176 [2024-12-09 05:17:46.628790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.628802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.176 [2024-12-09 05:17:46.628809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.628822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.176 [2024-12-09 05:17:46.628829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.628842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.176 [2024-12-09 05:17:46.628849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.628861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.176 [2024-12-09 05:17:46.628868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.628881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.176 [2024-12-09 05:17:46.628888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.629284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.176 [2024-12-09 05:17:46.629296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.629313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.176 [2024-12-09 05:17:46.629320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.629333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.176 [2024-12-09 05:17:46.629340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.629353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 
nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.176 [2024-12-09 05:17:46.629360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.629375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.176 [2024-12-09 05:17:46.629382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.629395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.176 [2024-12-09 05:17:46.629402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.629416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.176 [2024-12-09 05:17:46.629423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.629435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.176 [2024-12-09 05:17:46.629442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.629456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.176 [2024-12-09 05:17:46.629463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.629475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.176 [2024-12-09 05:17:46.629482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.629495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.176 [2024-12-09 05:17:46.629502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.629515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.176 [2024-12-09 05:17:46.629522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.629534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.176 [2024-12-09 05:17:46.629542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.629554] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.176 [2024-12-09 05:17:46.629562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.629574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.176 [2024-12-09 05:17:46.629581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.629594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.176 [2024-12-09 05:17:46.629601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.629615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.176 [2024-12-09 05:17:46.629623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.629790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.176 [2024-12-09 05:17:46.629800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.629814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.176 [2024-12-09 05:17:46.629821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.629834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.176 [2024-12-09 05:17:46.629841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.629854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.176 [2024-12-09 05:17:46.629861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:26.176 [2024-12-09 05:17:46.629874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.177 [2024-12-09 05:17:46.629881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.629894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.177 [2024-12-09 05:17:46.629901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 
00:23:26.177 [2024-12-09 05:17:46.629914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.177 [2024-12-09 05:17:46.629921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.629934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.177 [2024-12-09 05:17:46.629940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.629954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.177 [2024-12-09 05:17:46.629961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.629973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.177 [2024-12-09 05:17:46.629980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.629993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.177 [2024-12-09 05:17:46.630007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.630020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.177 [2024-12-09 05:17:46.630030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.630042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.177 [2024-12-09 05:17:46.630049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.630061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.177 [2024-12-09 05:17:46.630069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.630084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.177 [2024-12-09 05:17:46.630091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.630103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.177 [2024-12-09 05:17:46.630110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.630125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:73784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.177 [2024-12-09 05:17:46.630133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.630145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:73792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.177 [2024-12-09 05:17:46.630152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.630164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:73800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.177 [2024-12-09 05:17:46.630172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.630185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:73808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.177 [2024-12-09 05:17:46.630192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.630204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.177 [2024-12-09 05:17:46.630211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.630224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.177 [2024-12-09 05:17:46.630232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.630244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.177 [2024-12-09 05:17:46.630251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.630264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.177 [2024-12-09 05:17:46.630272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.630285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:73848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.177 [2024-12-09 05:17:46.630292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.630305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:73856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.177 [2024-12-09 05:17:46.630312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.630324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.177 [2024-12-09 05:17:46.630332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.630344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:73872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.177 [2024-12-09 05:17:46.630351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.630364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.177 [2024-12-09 05:17:46.630370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.630383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:73888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.177 [2024-12-09 05:17:46.630391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.630405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.177 [2024-12-09 05:17:46.630412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.630424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.177 [2024-12-09 05:17:46.630432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.630446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.177 [2024-12-09 05:17:46.630453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.630466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.177 [2024-12-09 05:17:46.630473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.630485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.177 [2024-12-09 05:17:46.630492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.640955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:26.177 [2024-12-09 05:17:46.640965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.640980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.177 [2024-12-09 05:17:46.640987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:26.177 [2024-12-09 05:17:46.641009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.178 [2024-12-09 05:17:46.641016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.178 [2024-12-09 05:17:46.641036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.178 [2024-12-09 05:17:46.641402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.178 [2024-12-09 05:17:46.641423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.178 [2024-12-09 05:17:46.641443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.178 [2024-12-09 05:17:46.641462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:73936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.178 [2024-12-09 05:17:46.641482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:73944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.178 [2024-12-09 05:17:46.641501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 
lba:73952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.178 [2024-12-09 05:17:46.641520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.178 [2024-12-09 05:17:46.641541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.178 [2024-12-09 05:17:46.641560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.178 [2024-12-09 05:17:46.641583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.178 [2024-12-09 05:17:46.641602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:73992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.178 [2024-12-09 05:17:46.641621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.178 [2024-12-09 05:17:46.641641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:74008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.178 [2024-12-09 05:17:46.641660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.178 [2024-12-09 05:17:46.641679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.178 [2024-12-09 05:17:46.641698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641710] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:74032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.178 [2024-12-09 05:17:46.641717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.178 [2024-12-09 05:17:46.641736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.178 [2024-12-09 05:17:46.641755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.178 [2024-12-09 05:17:46.641773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.178 [2024-12-09 05:17:46.641792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.178 [2024-12-09 05:17:46.641814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.178 [2024-12-09 05:17:46.641833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.178 [2024-12-09 05:17:46.641851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.178 [2024-12-09 05:17:46.641870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.178 [2024-12-09 05:17:46.641889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 
00:23:26.178 [2024-12-09 05:17:46.641901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.178 [2024-12-09 05:17:46.641908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.178 [2024-12-09 05:17:46.641927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.178 [2024-12-09 05:17:46.641945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.178 [2024-12-09 05:17:46.641964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.178 [2024-12-09 05:17:46.641983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.641995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.178 [2024-12-09 05:17:46.642008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.642020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.178 [2024-12-09 05:17:46.642027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.642039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.178 [2024-12-09 05:17:46.642048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.642060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.178 [2024-12-09 05:17:46.642067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.642079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.178 [2024-12-09 05:17:46.642086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:26.178 [2024-12-09 05:17:46.642098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.178 [2024-12-09 05:17:46.642104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.179 [2024-12-09 05:17:46.642123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.179 [2024-12-09 05:17:46.642142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.179 [2024-12-09 05:17:46.642160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.179 [2024-12-09 05:17:46.642179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.179 [2024-12-09 05:17:46.642198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.179 [2024-12-09 05:17:46.642217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:74056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.179 [2024-12-09 05:17:46.642236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:74064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.179 [2024-12-09 05:17:46.642255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:74072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.179 [2024-12-09 05:17:46.642274] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:74080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.179 [2024-12-09 05:17:46.642295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:74088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.179 [2024-12-09 05:17:46.642314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.179 [2024-12-09 05:17:46.642333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.179 [2024-12-09 05:17:46.642351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.179 [2024-12-09 05:17:46.642370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.179 [2024-12-09 05:17:46.642389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.179 [2024-12-09 05:17:46.642407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.179 [2024-12-09 05:17:46.642427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.179 [2024-12-09 05:17:46.642446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:26.179 [2024-12-09 05:17:46.642465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.179 [2024-12-09 05:17:46.642484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.179 [2024-12-09 05:17:46.642503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.179 [2024-12-09 05:17:46.642524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.179 [2024-12-09 05:17:46.642543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.179 [2024-12-09 05:17:46.642562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.179 [2024-12-09 05:17:46.642581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.179 [2024-12-09 05:17:46.642600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.179 [2024-12-09 05:17:46.642619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.179 [2024-12-09 05:17:46.642638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.179 [2024-12-09 05:17:46.642657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.179 [2024-12-09 05:17:46.642676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.179 [2024-12-09 05:17:46.642695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.179 [2024-12-09 05:17:46.642714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.179 [2024-12-09 05:17:46.642733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.179 [2024-12-09 05:17:46.642754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.179 [2024-12-09 05:17:46.642773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.179 [2024-12-09 05:17:46.642792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.179 [2024-12-09 05:17:46.642812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.179 [2024-12-09 05:17:46.642830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:26.179 [2024-12-09 05:17:46.642843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.642850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.642862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.642869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.642881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.642888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.642900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.642906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.642919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.642926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.642938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.642945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.642957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.642964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.642976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.642985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.643001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.643009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.643021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.643028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:23:26.180 [2024-12-09 05:17:46.643040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.643047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.643059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.643066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.643079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.643085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.643769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.643782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.643799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.643806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.643819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.643826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.643838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.643845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.643857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.643864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.643877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.643883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.643896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.643906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:3 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.643919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.643926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.643938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.643945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.643957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.643964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.643976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.643983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.643996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.644008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.644021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.644028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.644040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.644047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.644059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.644066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.644079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.644086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.644098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.180 [2024-12-09 05:17:46.644105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.644119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.180 [2024-12-09 05:17:46.644126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.644138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:73792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.180 [2024-12-09 05:17:46.644145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.644161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:73800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.180 [2024-12-09 05:17:46.644169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.644181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:73808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.180 [2024-12-09 05:17:46.644188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.644201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.180 [2024-12-09 05:17:46.644208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.644220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.180 [2024-12-09 05:17:46.644228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.644240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.180 [2024-12-09 05:17:46.644247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.644259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.180 [2024-12-09 05:17:46.644266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:26.180 [2024-12-09 05:17:46.644278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:73848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.180 [2024-12-09 05:17:46.644285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.644298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:73856 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:26.181 [2024-12-09 05:17:46.644304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.644317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.181 [2024-12-09 05:17:46.644323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.644336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:73872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.181 [2024-12-09 05:17:46.644343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.644355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.181 [2024-12-09 05:17:46.644362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.644374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:73888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.181 [2024-12-09 05:17:46.644381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.644395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.181 [2024-12-09 05:17:46.644402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.644415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.181 [2024-12-09 05:17:46.644421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.644434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.181 [2024-12-09 05:17:46.644441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.644454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.181 [2024-12-09 05:17:46.644460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.644473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.181 [2024-12-09 05:17:46.644481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.644494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:99 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.181 [2024-12-09 05:17:46.644500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.644513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.181 [2024-12-09 05:17:46.644520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.644532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.181 [2024-12-09 05:17:46.644539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.644852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.181 [2024-12-09 05:17:46.644861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.644875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.181 [2024-12-09 05:17:46.644881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.644894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.181 [2024-12-09 05:17:46.644901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.644913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.181 [2024-12-09 05:17:46.644919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.644932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.181 [2024-12-09 05:17:46.644940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.644953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:73936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.181 [2024-12-09 05:17:46.644959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.644972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:73944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.181 [2024-12-09 05:17:46.644978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.644990] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:73952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.181 [2024-12-09 05:17:46.645002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.645015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.181 [2024-12-09 05:17:46.645021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.645033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.181 [2024-12-09 05:17:46.645040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.645053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.181 [2024-12-09 05:17:46.645060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.645072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.181 [2024-12-09 05:17:46.645079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.645091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:73992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.181 [2024-12-09 05:17:46.645100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.645113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.181 [2024-12-09 05:17:46.645119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.645131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.181 [2024-12-09 05:17:46.645138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.645151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.181 [2024-12-09 05:17:46.645157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.645170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:74024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.181 [2024-12-09 05:17:46.645178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 
sqhd:003a p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.645190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.181 [2024-12-09 05:17:46.645197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.645209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.181 [2024-12-09 05:17:46.645216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.645228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.181 [2024-12-09 05:17:46.645235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.645247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.181 [2024-12-09 05:17:46.645254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:26.181 [2024-12-09 05:17:46.645266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.181 [2024-12-09 05:17:46.645272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.645285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.182 [2024-12-09 05:17:46.645292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.645487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.182 [2024-12-09 05:17:46.645497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.645510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.182 [2024-12-09 05:17:46.645517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.645530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.182 [2024-12-09 05:17:46.645536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.645549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.182 [2024-12-09 05:17:46.645556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.645568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.182 [2024-12-09 05:17:46.645574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.645586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.182 [2024-12-09 05:17:46.645594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.645608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.182 [2024-12-09 05:17:46.645615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.645627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.182 [2024-12-09 05:17:46.645634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.645646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.182 [2024-12-09 05:17:46.645653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.645665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.182 [2024-12-09 05:17:46.645672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.645684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.182 [2024-12-09 05:17:46.645690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.645703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.182 [2024-12-09 05:17:46.645709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.645721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.182 [2024-12-09 05:17:46.645728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.645740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.182 [2024-12-09 05:17:46.645747] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.645759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.182 [2024-12-09 05:17:46.645766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.645778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.182 [2024-12-09 05:17:46.645784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.645796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.182 [2024-12-09 05:17:46.645803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.645816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.182 [2024-12-09 05:17:46.645822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.645836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.182 [2024-12-09 05:17:46.645843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.645855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.182 [2024-12-09 05:17:46.645862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.645874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.182 [2024-12-09 05:17:46.651777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.651793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:74056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.182 [2024-12-09 05:17:46.651801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.651813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.182 [2024-12-09 05:17:46.651820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.651832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74072 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:26.182 [2024-12-09 05:17:46.651839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.651851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:74080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.182 [2024-12-09 05:17:46.651858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.651870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:74088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.182 [2024-12-09 05:17:46.651877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.651889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.182 [2024-12-09 05:17:46.651896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.651908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.182 [2024-12-09 05:17:46.651915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.651927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.182 [2024-12-09 05:17:46.651934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.651946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.182 [2024-12-09 05:17:46.651953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.651965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.182 [2024-12-09 05:17:46.651973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.651986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.182 [2024-12-09 05:17:46.651993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.652302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.182 [2024-12-09 05:17:46.652313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.652328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:124 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.182 [2024-12-09 05:17:46.652335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.652348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.182 [2024-12-09 05:17:46.652355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.652368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.182 [2024-12-09 05:17:46.652375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:26.182 [2024-12-09 05:17:46.652387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.182 [2024-12-09 05:17:46.652394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.183 [2024-12-09 05:17:46.652413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.183 [2024-12-09 05:17:46.652432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.183 [2024-12-09 05:17:46.652451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.183 [2024-12-09 05:17:46.652470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.183 [2024-12-09 05:17:46.652489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.183 [2024-12-09 05:17:46.652510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652523] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.183 [2024-12-09 05:17:46.652529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.183 [2024-12-09 05:17:46.652548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.183 [2024-12-09 05:17:46.652567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.183 [2024-12-09 05:17:46.652586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.183 [2024-12-09 05:17:46.652605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.183 [2024-12-09 05:17:46.652624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.183 [2024-12-09 05:17:46.652643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.183 [2024-12-09 05:17:46.652662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.183 [2024-12-09 05:17:46.652681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.183 [2024-12-09 05:17:46.652700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 
m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.183 [2024-12-09 05:17:46.652719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.183 [2024-12-09 05:17:46.652738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.183 [2024-12-09 05:17:46.652758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.183 [2024-12-09 05:17:46.652777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.183 [2024-12-09 05:17:46.652797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.183 [2024-12-09 05:17:46.652816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.183 [2024-12-09 05:17:46.652835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.183 [2024-12-09 05:17:46.652854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.183 [2024-12-09 05:17:46.652874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.183 [2024-12-09 05:17:46.652893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.183 [2024-12-09 05:17:46.652912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.183 [2024-12-09 05:17:46.652931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.183 [2024-12-09 05:17:46.652950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.183 [2024-12-09 05:17:46.652968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.652982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.183 [2024-12-09 05:17:46.652989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.653006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.183 [2024-12-09 05:17:46.653014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.653026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.183 [2024-12-09 05:17:46.653032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.653045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.183 [2024-12-09 05:17:46.653051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.653063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.183 [2024-12-09 05:17:46.653070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.653082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.183 [2024-12-09 05:17:46.653089] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.653101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.183 [2024-12-09 05:17:46.653108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.653120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.183 [2024-12-09 05:17:46.653126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:26.183 [2024-12-09 05:17:46.653138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.184 [2024-12-09 05:17:46.653145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.184 [2024-12-09 05:17:46.653164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.184 [2024-12-09 05:17:46.653183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.184 [2024-12-09 05:17:46.653201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.184 [2024-12-09 05:17:46.653221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.184 [2024-12-09 05:17:46.653241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.184 [2024-12-09 05:17:46.653260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:26.184 [2024-12-09 05:17:46.653278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:73784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.184 [2024-12-09 05:17:46.653298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:73792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.184 [2024-12-09 05:17:46.653317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:73800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.184 [2024-12-09 05:17:46.653336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.184 [2024-12-09 05:17:46.653355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.184 [2024-12-09 05:17:46.653374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.184 [2024-12-09 05:17:46.653392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.184 [2024-12-09 05:17:46.653411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.184 [2024-12-09 05:17:46.653430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.184 [2024-12-09 05:17:46.653451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 
nsid:1 lba:73856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.184 [2024-12-09 05:17:46.653470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.184 [2024-12-09 05:17:46.653489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:73872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.184 [2024-12-09 05:17:46.653507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.184 [2024-12-09 05:17:46.653526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.184 [2024-12-09 05:17:46.653545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.184 [2024-12-09 05:17:46.653564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.184 [2024-12-09 05:17:46.653583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.184 [2024-12-09 05:17:46.653604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.184 [2024-12-09 05:17:46.653623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.184 [2024-12-09 05:17:46.653643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653655] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.184 [2024-12-09 05:17:46.653662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.184 [2024-12-09 05:17:46.653682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.184 [2024-12-09 05:17:46.653704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.184 [2024-12-09 05:17:46.653723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.184 [2024-12-09 05:17:46.653742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.184 [2024-12-09 05:17:46.653761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.184 [2024-12-09 05:17:46.653780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.184 [2024-12-09 05:17:46.653799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:73936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.184 [2024-12-09 05:17:46.653818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:73944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.184 [2024-12-09 05:17:46.653837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 
00:23:26.184 [2024-12-09 05:17:46.653850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:73952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.184 [2024-12-09 05:17:46.653857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.184 [2024-12-09 05:17:46.653876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:26.184 [2024-12-09 05:17:46.653888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.185 [2024-12-09 05:17:46.653895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.653909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.185 [2024-12-09 05:17:46.653915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.653930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.185 [2024-12-09 05:17:46.653937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.653949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:73992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.185 [2024-12-09 05:17:46.653957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.653970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.185 [2024-12-09 05:17:46.653978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.653991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:74008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.185 [2024-12-09 05:17:46.654003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.654016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.185 [2024-12-09 05:17:46.654023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.654035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:74024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.185 [2024-12-09 05:17:46.654042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.654054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:74032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.185 [2024-12-09 05:17:46.654061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.654073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.185 [2024-12-09 05:17:46.654080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.654092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.185 [2024-12-09 05:17:46.654099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.654111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.185 [2024-12-09 05:17:46.654118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.654131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.185 [2024-12-09 05:17:46.654138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.654873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.185 [2024-12-09 05:17:46.654888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.654906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.185 [2024-12-09 05:17:46.654914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.654927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.185 [2024-12-09 05:17:46.654934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.654946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.185 [2024-12-09 05:17:46.654953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.654966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.185 [2024-12-09 05:17:46.654973] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.654985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.185 [2024-12-09 05:17:46.654992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.655011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.185 [2024-12-09 05:17:46.655019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.655032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.185 [2024-12-09 05:17:46.655038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.655051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.185 [2024-12-09 05:17:46.655057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.655069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.185 [2024-12-09 05:17:46.655076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.655088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.185 [2024-12-09 05:17:46.655095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.655107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.185 [2024-12-09 05:17:46.655114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.655127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.185 [2024-12-09 05:17:46.655133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.655146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.185 [2024-12-09 05:17:46.655157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.655170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:26.185 [2024-12-09 05:17:46.655177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.655189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.185 [2024-12-09 05:17:46.655196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.655209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.185 [2024-12-09 05:17:46.655215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.655228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.185 [2024-12-09 05:17:46.655235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.655247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.185 [2024-12-09 05:17:46.655254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.655266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.185 [2024-12-09 05:17:46.655273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.655286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.185 [2024-12-09 05:17:46.655294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.655307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.185 [2024-12-09 05:17:46.655313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.655328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:74056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.185 [2024-12-09 05:17:46.655335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.655347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:74064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.185 [2024-12-09 05:17:46.655355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.655368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:74072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.185 [2024-12-09 05:17:46.655375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:26.185 [2024-12-09 05:17:46.655387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:74080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.186 [2024-12-09 05:17:46.655395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.655410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:74088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.186 [2024-12-09 05:17:46.655417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.655429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.186 [2024-12-09 05:17:46.655437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.655450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.186 [2024-12-09 05:17:46.655457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.655471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.186 [2024-12-09 05:17:46.655477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.655489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.186 [2024-12-09 05:17:46.655497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.655511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.186 [2024-12-09 05:17:46.655518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.655789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.186 [2024-12-09 05:17:46.655800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.655814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.186 [2024-12-09 05:17:46.655822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.655837] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.186 [2024-12-09 05:17:46.655844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.655857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.186 [2024-12-09 05:17:46.655866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.655880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.186 [2024-12-09 05:17:46.655887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.655899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.186 [2024-12-09 05:17:46.655906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.655920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.186 [2024-12-09 05:17:46.655927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.655940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.186 [2024-12-09 05:17:46.655947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.655959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.186 [2024-12-09 05:17:46.655966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.655978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.186 [2024-12-09 05:17:46.655984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.655997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.186 [2024-12-09 05:17:46.656009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.656022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.186 [2024-12-09 05:17:46.656028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006b p:0 m:0 
dnr:0 00:23:26.186 [2024-12-09 05:17:46.656040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.186 [2024-12-09 05:17:46.656049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.656063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.186 [2024-12-09 05:17:46.656070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.656082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.186 [2024-12-09 05:17:46.656089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.656101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.186 [2024-12-09 05:17:46.656108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.656121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.186 [2024-12-09 05:17:46.656127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.656140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.186 [2024-12-09 05:17:46.656146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.656161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.186 [2024-12-09 05:17:46.656168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.656180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.186 [2024-12-09 05:17:46.656187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.656199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.186 [2024-12-09 05:17:46.656206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.656218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.186 [2024-12-09 05:17:46.656225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.656238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.186 [2024-12-09 05:17:46.656244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.656474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.186 [2024-12-09 05:17:46.656484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.656497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.186 [2024-12-09 05:17:46.656504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.656517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.186 [2024-12-09 05:17:46.656525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.656537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.186 [2024-12-09 05:17:46.656546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:26.186 [2024-12-09 05:17:46.656559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.186 [2024-12-09 05:17:46.656568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.656582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.187 [2024-12-09 05:17:46.656591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.656606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.187 [2024-12-09 05:17:46.656614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.656629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.187 [2024-12-09 05:17:46.656639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.656652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.187 [2024-12-09 05:17:46.656660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.656672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.187 [2024-12-09 05:17:46.656679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.656692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.187 [2024-12-09 05:17:46.656699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.656711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.187 [2024-12-09 05:17:46.656718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.656731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.187 [2024-12-09 05:17:46.656738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.656752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.187 [2024-12-09 05:17:46.656759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.656771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.187 [2024-12-09 05:17:46.656778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.656790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.187 [2024-12-09 05:17:46.656799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.656812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.187 [2024-12-09 05:17:46.656820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.656996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.187 [2024-12-09 05:17:46.657012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.657025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:26.187 [2024-12-09 05:17:46.657032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.657045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.187 [2024-12-09 05:17:46.657054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.657067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.187 [2024-12-09 05:17:46.657074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.657087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.187 [2024-12-09 05:17:46.657093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.657106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.187 [2024-12-09 05:17:46.657113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.657125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.187 [2024-12-09 05:17:46.657132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.657144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.187 [2024-12-09 05:17:46.657151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.657163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.187 [2024-12-09 05:17:46.657170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.657182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.187 [2024-12-09 05:17:46.657188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.657201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.187 [2024-12-09 05:17:46.657208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.657220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 
lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.187 [2024-12-09 05:17:46.657226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.657241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:73784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.187 [2024-12-09 05:17:46.657248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.657260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:73792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.187 [2024-12-09 05:17:46.657267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.657279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.187 [2024-12-09 05:17:46.657286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.657300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:73808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.187 [2024-12-09 05:17:46.657307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.657319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.187 [2024-12-09 05:17:46.657326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.657339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.187 [2024-12-09 05:17:46.657346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.657358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.187 [2024-12-09 05:17:46.657365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.657377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.187 [2024-12-09 05:17:46.657384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.657396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:73848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.187 [2024-12-09 05:17:46.657403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.657415] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:73856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.187 [2024-12-09 05:17:46.657422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.657434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.187 [2024-12-09 05:17:46.657441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.657453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:73872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.187 [2024-12-09 05:17:46.657460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.657472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.187 [2024-12-09 05:17:46.657479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:26.187 [2024-12-09 05:17:46.657492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:73888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.188 [2024-12-09 05:17:46.657498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.657511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.188 [2024-12-09 05:17:46.657517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.657531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.188 [2024-12-09 05:17:46.657538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.657551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.188 [2024-12-09 05:17:46.657558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.657570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.188 [2024-12-09 05:17:46.657577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.657589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.188 [2024-12-09 05:17:46.657596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 
00:23:26.188 [2024-12-09 05:17:46.657852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.188 [2024-12-09 05:17:46.657862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.657875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.188 [2024-12-09 05:17:46.657883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.657895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.188 [2024-12-09 05:17:46.657902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.657914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.188 [2024-12-09 05:17:46.657921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.657933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.188 [2024-12-09 05:17:46.657940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.657952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.188 [2024-12-09 05:17:46.657959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.657972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.188 [2024-12-09 05:17:46.657978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.657990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.188 [2024-12-09 05:17:46.658004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.658017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:73936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.188 [2024-12-09 05:17:46.658026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.658038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:73944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.188 [2024-12-09 05:17:46.658045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:79 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.658058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.188 [2024-12-09 05:17:46.658065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.658077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.188 [2024-12-09 05:17:46.658084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.658097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.188 [2024-12-09 05:17:46.658104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.658117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.188 [2024-12-09 05:17:46.658124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.658136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.188 [2024-12-09 05:17:46.658143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.658155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:73992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.188 [2024-12-09 05:17:46.658162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.658174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.188 [2024-12-09 05:17:46.658181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.658193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:74008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.188 [2024-12-09 05:17:46.658200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.658213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.188 [2024-12-09 05:17:46.658220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.658232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:74024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.188 [2024-12-09 05:17:46.658239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.658251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:74032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.188 [2024-12-09 05:17:46.658260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.658272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.188 [2024-12-09 05:17:46.658279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.658291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.188 [2024-12-09 05:17:46.658298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.658310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.188 [2024-12-09 05:17:46.658317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.658511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.188 [2024-12-09 05:17:46.658521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.658534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.188 [2024-12-09 05:17:46.658541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.658554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.188 [2024-12-09 05:17:46.658560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.658573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.188 [2024-12-09 05:17:46.658579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.658592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.188 [2024-12-09 05:17:46.658598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.658611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:26.188 [2024-12-09 05:17:46.658618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.658630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.188 [2024-12-09 05:17:46.658637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.658649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.188 [2024-12-09 05:17:46.658656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:26.188 [2024-12-09 05:17:46.658668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.189 [2024-12-09 05:17:46.658675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.658689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.189 [2024-12-09 05:17:46.658696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.658709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.189 [2024-12-09 05:17:46.658716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.658728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.189 [2024-12-09 05:17:46.658735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.658747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.189 [2024-12-09 05:17:46.658754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.658766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.189 [2024-12-09 05:17:46.658772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.658785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.189 [2024-12-09 05:17:46.658792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.658804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 
lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.189 [2024-12-09 05:17:46.658811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.658823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.189 [2024-12-09 05:17:46.658829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.658842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.189 [2024-12-09 05:17:46.658848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.658861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.189 [2024-12-09 05:17:46.658868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.658880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.189 [2024-12-09 05:17:46.658886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.662714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.189 [2024-12-09 05:17:46.662724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.662740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.189 [2024-12-09 05:17:46.662747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.662759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.189 [2024-12-09 05:17:46.662766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.662779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.189 [2024-12-09 05:17:46.662786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.662798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:74064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.189 [2024-12-09 05:17:46.662805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.662818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:74072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.189 [2024-12-09 05:17:46.662824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.662837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:74080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.189 [2024-12-09 05:17:46.662843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.662856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.189 [2024-12-09 05:17:46.662862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.662875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.189 [2024-12-09 05:17:46.662881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.662894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.189 [2024-12-09 05:17:46.662900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.662912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.189 [2024-12-09 05:17:46.662919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.662932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.189 [2024-12-09 05:17:46.662938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.663250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.189 [2024-12-09 05:17:46.663263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.663278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.189 [2024-12-09 05:17:46.663287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.663300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.189 [2024-12-09 05:17:46.663306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:26.189 [2024-12-09 05:17:46.663319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.189 [2024-12-09 05:17:46.663325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.663338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.189 [2024-12-09 05:17:46.663344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.663357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.189 [2024-12-09 05:17:46.663364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.663376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.189 [2024-12-09 05:17:46.663383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.663395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.189 [2024-12-09 05:17:46.663402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.663414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.189 [2024-12-09 05:17:46.663421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.663434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.189 [2024-12-09 05:17:46.663440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.663453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.189 [2024-12-09 05:17:46.663460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.663472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.189 [2024-12-09 05:17:46.663479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.663491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.189 [2024-12-09 05:17:46.663497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.663510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.189 [2024-12-09 05:17:46.663518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:26.189 [2024-12-09 05:17:46.663531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.190 [2024-12-09 05:17:46.663538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.663550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.190 [2024-12-09 05:17:46.663557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.663569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.190 [2024-12-09 05:17:46.663576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.663588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.190 [2024-12-09 05:17:46.663595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.663607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.190 [2024-12-09 05:17:46.663614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.663626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.190 [2024-12-09 05:17:46.663633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.663645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.663652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.663665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.663672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.663684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.663691] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.663704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.663710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.663723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.663729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.663742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.663748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.663762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.663769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.663781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.663788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.663800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.663807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.663819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.663826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.663838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.663845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.663857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.663864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.663876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:26.190 [2024-12-09 05:17:46.663883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.663895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.663902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.663914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.663921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.663933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.663940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.663952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.663959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.663971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.663978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.663994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.664006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.664019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.664025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.664038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.664044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.664057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.664063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.664076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 
lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.664083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.664095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.664102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.664114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.664121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.664133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.664140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.664152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.664159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.664171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.664178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.664190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.190 [2024-12-09 05:17:46.664197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:26.190 [2024-12-09 05:17:46.664209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.191 [2024-12-09 05:17:46.664215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.191 [2024-12-09 05:17:46.664237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.191 [2024-12-09 05:17:46.664256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.191 [2024-12-09 05:17:46.664275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:73784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-12-09 05:17:46.664295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:73792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-12-09 05:17:46.664314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:73800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-12-09 05:17:46.664333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-12-09 05:17:46.664352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-12-09 05:17:46.664372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-12-09 05:17:46.664391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-12-09 05:17:46.664410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-12-09 05:17:46.664429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:73848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-12-09 05:17:46.664448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 
dnr:0 00:23:26.191 [2024-12-09 05:17:46.664460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:73856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-12-09 05:17:46.664468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-12-09 05:17:46.664488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:73872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-12-09 05:17:46.664507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-12-09 05:17:46.664526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:73888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-12-09 05:17:46.664545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-12-09 05:17:46.664564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-12-09 05:17:46.664583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.191 [2024-12-09 05:17:46.664602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.191 [2024-12-09 05:17:46.664621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.191 [2024-12-09 05:17:46.664640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.191 [2024-12-09 05:17:46.664659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.191 [2024-12-09 05:17:46.664678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.191 [2024-12-09 05:17:46.664698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.191 [2024-12-09 05:17:46.664717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.191 [2024-12-09 05:17:46.664736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-12-09 05:17:46.664755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-12-09 05:17:46.664774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-12-09 05:17:46.664794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-12-09 05:17:46.664812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:73944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-12-09 05:17:46.664831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-12-09 05:17:46.664850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-12-09 05:17:46.664869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-12-09 05:17:46.664888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-12-09 05:17:46.664908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.191 [2024-12-09 05:17:46.664928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:26.191 [2024-12-09 05:17:46.664942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:73992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-12-09 05:17:46.664949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.664961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-12-09 05:17:46.664968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.664981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-12-09 05:17:46.664988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.665003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-12-09 05:17:46.665011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.665023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:74024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:26.192 [2024-12-09 05:17:46.665030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.665042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:74032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-12-09 05:17:46.665050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.665062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.192 [2024-12-09 05:17:46.665069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.665081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.192 [2024-12-09 05:17:46.665088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.665816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.192 [2024-12-09 05:17:46.665829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.665844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.192 [2024-12-09 05:17:46.665851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.665864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.192 [2024-12-09 05:17:46.665871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.665883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.192 [2024-12-09 05:17:46.665890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.665905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.192 [2024-12-09 05:17:46.665912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.665924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.192 [2024-12-09 05:17:46.665931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.665945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 
lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.192 [2024-12-09 05:17:46.665952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.665965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.192 [2024-12-09 05:17:46.665974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.665987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.192 [2024-12-09 05:17:46.665995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.666014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.192 [2024-12-09 05:17:46.666021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.666034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.192 [2024-12-09 05:17:46.666042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.666055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.192 [2024-12-09 05:17:46.666063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.666076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.192 [2024-12-09 05:17:46.666084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.666097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.192 [2024-12-09 05:17:46.666106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.666118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.192 [2024-12-09 05:17:46.666126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.666139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.192 [2024-12-09 05:17:46.666148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.666161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.192 [2024-12-09 05:17:46.666170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.666182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.192 [2024-12-09 05:17:46.666189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.666202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.192 [2024-12-09 05:17:46.666208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.666222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.192 [2024-12-09 05:17:46.666229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.666241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.192 [2024-12-09 05:17:46.666248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.666261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.192 [2024-12-09 05:17:46.666268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.666281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-12-09 05:17:46.666288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.666301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-12-09 05:17:46.666308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.666322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-12-09 05:17:46.666329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.666341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:74064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-12-09 05:17:46.666348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 
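(Every record in the dump above has the same shape: an nvme_io_qpair_print_command NOTICE giving the opcode, cid and lba, paired with an spdk_nvme_print_completion NOTICE giving the status, here ASYMMETRIC ACCESS INACCESSIBLE (03/02). A minimal standalone sketch for tallying such records from a saved console log is shown below; it is not part of the autotest scripts, and the console.log path is only an assumption.)

#!/usr/bin/env python3
# Sketch: count the NVMe command/completion NOTICE lines printed by nvme_qpair.c
# in a saved Jenkins console log, grouped by opcode and by completion status.
import re
import sys
from collections import Counter

# Command records look like:
#   nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:73856 len:8 ...
CMD_RE = re.compile(
    r'nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:\d+ len:\d+')
# Completion records look like:
#   spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 ...
CPL_RE = re.compile(
    r'spdk_nvme_print_completion: \*NOTICE\*: (.+?) \((\w+)/(\w+)\) qid:\d+ cid:\d+')

def tally(stream):
    cmds, cpls = Counter(), Counter()
    for line in stream:
        # A wrapped console line may carry several records, so scan all matches.
        for m in CMD_RE.finditer(line):
            cmds[m.group(1)] += 1
        for m in CPL_RE.finditer(line):
            cpls["%s (%s/%s)" % (m.group(1), m.group(2), m.group(3))] += 1
    return cmds, cpls

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "console.log"  # assumed file name
    with open(path) as f:
        commands, completions = tally(f)
    print("commands:", dict(commands))
    print("completions:", dict(completions))

(Running it against a capture of this section would report the READ/WRITE command counts and how many completions carried the ASYMMETRIC ACCESS INACCESSIBLE (03/02) path status seen throughout the dump.)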
00:23:26.192 [2024-12-09 05:17:46.666361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:74072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-12-09 05:17:46.666368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.666380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:74080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-12-09 05:17:46.666387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.666399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:74088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-12-09 05:17:46.666408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.666421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.192 [2024-12-09 05:17:46.666428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:26.192 [2024-12-09 05:17:46.666440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.192 [2024-12-09 05:17:46.666446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.666460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.193 [2024-12-09 05:17:46.666466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.666735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.193 [2024-12-09 05:17:46.666744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.666758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.193 [2024-12-09 05:17:46.666765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.666777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.193 [2024-12-09 05:17:46.666784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.666796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.193 [2024-12-09 05:17:46.666803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.666815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.193 [2024-12-09 05:17:46.666822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.666834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-12-09 05:17:46.666841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.666854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-12-09 05:17:46.666860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.666873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-12-09 05:17:46.666879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.666892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-12-09 05:17:46.666898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.666913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-12-09 05:17:46.666920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.666932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-12-09 05:17:46.666939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.666951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-12-09 05:17:46.666958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.666970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-12-09 05:17:46.666977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.666989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-12-09 05:17:46.666996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.667015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-12-09 05:17:46.667022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.667034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-12-09 05:17:46.667041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.667053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-12-09 05:17:46.667060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.667072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-12-09 05:17:46.667079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.667091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-12-09 05:17:46.667098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.667110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-12-09 05:17:46.667117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.667129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.193 [2024-12-09 05:17:46.667136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.667150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.193 [2024-12-09 05:17:46.667157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.667169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.193 [2024-12-09 05:17:46.667176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.667372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:26.193 [2024-12-09 05:17:46.667382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.667395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.193 [2024-12-09 05:17:46.667402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.667415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.193 [2024-12-09 05:17:46.667421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.667434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.193 [2024-12-09 05:17:46.667440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.667453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.193 [2024-12-09 05:17:46.667459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.667472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.193 [2024-12-09 05:17:46.667479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.667492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.193 [2024-12-09 05:17:46.667499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.667511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.193 [2024-12-09 05:17:46.667517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.667530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.193 [2024-12-09 05:17:46.667536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.667548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.193 [2024-12-09 05:17:46.667555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.667568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 
lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.193 [2024-12-09 05:17:46.667577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.667590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.193 [2024-12-09 05:17:46.667597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.667609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.193 [2024-12-09 05:17:46.667616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.667628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.193 [2024-12-09 05:17:46.667635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:26.193 [2024-12-09 05:17:46.667647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.193 [2024-12-09 05:17:46.667654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.667667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.194 [2024-12-09 05:17:46.667674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.667687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.194 [2024-12-09 05:17:46.667694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.667869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.194 [2024-12-09 05:17:46.667878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.667892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.194 [2024-12-09 05:17:46.667898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.667911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.194 [2024-12-09 05:17:46.667917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.667930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.194 [2024-12-09 05:17:46.667937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.667950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.194 [2024-12-09 05:17:46.667957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.667969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.194 [2024-12-09 05:17:46.667978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.667990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.194 [2024-12-09 05:17:46.668002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.194 [2024-12-09 05:17:46.668022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.194 [2024-12-09 05:17:46.668041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.194 [2024-12-09 05:17:46.668061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.194 [2024-12-09 05:17:46.668079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.194 [2024-12-09 05:17:46.668099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.194 [2024-12-09 05:17:46.668117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:26.194 
[2024-12-09 05:17:46.668130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.194 [2024-12-09 05:17:46.668137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.194 [2024-12-09 05:17:46.668156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:73792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.194 [2024-12-09 05:17:46.668175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:73800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.194 [2024-12-09 05:17:46.668194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:73808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.194 [2024-12-09 05:17:46.668213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.194 [2024-12-09 05:17:46.668233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.194 [2024-12-09 05:17:46.668252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.194 [2024-12-09 05:17:46.668271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.194 [2024-12-09 05:17:46.668290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.194 [2024-12-09 05:17:46.668309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:67 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:73856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.194 [2024-12-09 05:17:46.668328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.194 [2024-12-09 05:17:46.668347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.194 [2024-12-09 05:17:46.668366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.194 [2024-12-09 05:17:46.668385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.194 [2024-12-09 05:17:46.668404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.194 [2024-12-09 05:17:46.668423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.194 [2024-12-09 05:17:46.668442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.194 [2024-12-09 05:17:46.668464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.194 [2024-12-09 05:17:46.668722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.194 [2024-12-09 05:17:46.668742] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.194 [2024-12-09 05:17:46.668761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.194 [2024-12-09 05:17:46.668781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.194 [2024-12-09 05:17:46.668804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:26.194 [2024-12-09 05:17:46.668817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.194 [2024-12-09 05:17:46.668824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.668837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.195 [2024-12-09 05:17:46.668844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.668856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.195 [2024-12-09 05:17:46.668863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.668875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.195 [2024-12-09 05:17:46.668882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.668894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.195 [2024-12-09 05:17:46.668901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.668913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:73936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.195 [2024-12-09 05:17:46.668920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.668932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:73944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:26.195 [2024-12-09 05:17:46.668941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.668954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.195 [2024-12-09 05:17:46.668961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.668973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.195 [2024-12-09 05:17:46.668980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.668992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.195 [2024-12-09 05:17:46.669003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.669016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.195 [2024-12-09 05:17:46.669023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.669035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.195 [2024-12-09 05:17:46.669042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.669055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:73992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.195 [2024-12-09 05:17:46.669061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.669074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.195 [2024-12-09 05:17:46.669080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.669093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.195 [2024-12-09 05:17:46.669100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.669113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.195 [2024-12-09 05:17:46.669120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.669133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 
nsid:1 lba:74024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.195 [2024-12-09 05:17:46.669140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.669153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.195 [2024-12-09 05:17:46.669160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.669172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.195 [2024-12-09 05:17:46.669183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.669379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.195 [2024-12-09 05:17:46.669388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.669403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.195 [2024-12-09 05:17:46.669410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.669422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.195 [2024-12-09 05:17:46.669429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.669442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.195 [2024-12-09 05:17:46.669450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.669462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.195 [2024-12-09 05:17:46.669469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.669482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.195 [2024-12-09 05:17:46.669488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.669501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.195 [2024-12-09 05:17:46.669508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.669522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.195 [2024-12-09 05:17:46.669529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.669542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.195 [2024-12-09 05:17:46.669550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.669563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.195 [2024-12-09 05:17:46.669570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.669583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.195 [2024-12-09 05:17:46.669591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.669603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.195 [2024-12-09 05:17:46.669612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.669626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.195 [2024-12-09 05:17:46.669633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.669646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.195 [2024-12-09 05:17:46.669653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.669666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.195 [2024-12-09 05:17:46.669673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.669685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.195 [2024-12-09 05:17:46.669692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:26.195 [2024-12-09 05:17:46.669704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.195 [2024-12-09 05:17:46.669712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 
00:23:26.195 [2024-12-09 05:17:46.669725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.196 [2024-12-09 05:17:46.669732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.669746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.196 [2024-12-09 05:17:46.669753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.669766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.196 [2024-12-09 05:17:46.669773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.669786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.196 [2024-12-09 05:17:46.669792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.669805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.196 [2024-12-09 05:17:46.669812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.669825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.196 [2024-12-09 05:17:46.669831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.669844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-12-09 05:17:46.669851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.669866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-12-09 05:17:46.669873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.669885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:74056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-12-09 05:17:46.669892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.669904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:74064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-12-09 05:17:46.669911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.669924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-12-09 05:17:46.669931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.669943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-12-09 05:17:46.669951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.669963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-12-09 05:17:46.669971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.669984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-12-09 05:17:46.669991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.670009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.196 [2024-12-09 05:17:46.670017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.670287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.196 [2024-12-09 05:17:46.670296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.670310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.196 [2024-12-09 05:17:46.670317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.670329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.196 [2024-12-09 05:17:46.670336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.670348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.196 [2024-12-09 05:17:46.670355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.670367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.196 [2024-12-09 05:17:46.670376] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.670389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.196 [2024-12-09 05:17:46.670395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.670408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-12-09 05:17:46.670414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.670427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-12-09 05:17:46.670434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.670446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-12-09 05:17:46.670453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.670465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-12-09 05:17:46.670472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.670484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-12-09 05:17:46.670491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.670503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-12-09 05:17:46.670509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.670522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-12-09 05:17:46.670529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.670541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-12-09 05:17:46.670548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.670560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:26.196 [2024-12-09 05:17:46.670567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.670580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-12-09 05:17:46.670586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.670599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-12-09 05:17:46.670607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.670620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-12-09 05:17:46.670627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.670640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-12-09 05:17:46.670647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.670659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-12-09 05:17:46.670665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.670678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-12-09 05:17:46.670685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.670697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.196 [2024-12-09 05:17:46.670703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.670716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.196 [2024-12-09 05:17:46.670722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:26.196 [2024-12-09 05:17:46.670735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.196 [2024-12-09 05:17:46.670742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 
lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
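The command notices in this trace all carry the same handful of fields: the opcode (READ or WRITE), the submission queue id (sqid), the command identifier (cid), the namespace id (nsid), the starting LBA, the transfer length in logical blocks, and the SGL descriptor type. The sketch below is a hypothetical, standalone C printer that mirrors that format for one entry taken from the trace above (WRITE sqid:1 cid:42 nsid:1 lba:74232 len:8); it is illustrative only and is not SPDK's nvme_io_qpair_print_command.

```c
/* Illustrative pretty-printer mirroring the fields shown by the
 * "nvme_io_qpair_print_command" notices in this log.
 * A hypothetical sketch, not SPDK's implementation. */
#include <stdint.h>
#include <stdio.h>

struct io_cmd {
    const char *opc;   /* "READ" or "WRITE"        */
    uint16_t    sqid;  /* submission queue id      */
    uint16_t    cid;   /* command identifier       */
    uint32_t    nsid;  /* namespace id             */
    uint64_t    lba;   /* starting logical block   */
    uint32_t    len;   /* number of logical blocks */
};

static void print_io_cmd(const struct io_cmd *c)
{
    printf("%s sqid:%u cid:%u nsid:%u lba:%llu len:%u\n",
           c->opc, (unsigned)c->sqid, (unsigned)c->cid,
           (unsigned)c->nsid, (unsigned long long)c->lba,
           (unsigned)c->len);
}

int main(void)
{
    /* One entry from the trace above: an 8-block write at LBA 74232. */
    struct io_cmd c = { "WRITE", 1, 42, 1, 74232, 8 };
    print_io_cmd(&c);
    return 0;
}
```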
00:23:26.197 [2024-12-09 05:17:46.671575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:55 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.197 [2024-12-09 05:17:46.671782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:73784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-12-09 05:17:46.671803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-12-09 05:17:46.671822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:73800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-12-09 05:17:46.671841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:73808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-12-09 05:17:46.671860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-12-09 05:17:46.671879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-12-09 05:17:46.671898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-12-09 05:17:46.671917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:26.197 [2024-12-09 05:17:46.671930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.197 [2024-12-09 05:17:46.671937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.671949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:73848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-12-09 05:17:46.671956] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.671969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:73856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-12-09 05:17:46.671975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.671988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-12-09 05:17:46.671996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:73872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-12-09 05:17:46.672023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-12-09 05:17:46.672042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:73888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-12-09 05:17:46.672062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-12-09 05:17:46.672081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-12-09 05:17:46.672100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.198 [2024-12-09 05:17:46.672121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.198 [2024-12-09 05:17:46.672388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:26.198 [2024-12-09 05:17:46.672410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.198 [2024-12-09 05:17:46.672430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.198 [2024-12-09 05:17:46.672449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.198 [2024-12-09 05:17:46.672469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.198 [2024-12-09 05:17:46.672489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.198 [2024-12-09 05:17:46.672512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-12-09 05:17:46.672533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-12-09 05:17:46.672553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-12-09 05:17:46.672574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-12-09 05:17:46.672594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 
lba:73944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-12-09 05:17:46.672614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:73952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-12-09 05:17:46.672634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-12-09 05:17:46.672653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-12-09 05:17:46.672673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-12-09 05:17:46.672693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-12-09 05:17:46.672713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:73992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-12-09 05:17:46.672732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-12-09 05:17:46.672752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:74008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-12-09 05:17:46.672771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-12-09 05:17:46.672790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672802] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:74024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-12-09 05:17:46.672809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:74032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.198 [2024-12-09 05:17:46.672828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.672841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.198 [2024-12-09 05:17:46.672847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.673047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.198 [2024-12-09 05:17:46.673057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.673071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.198 [2024-12-09 05:17:46.673078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.673090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.198 [2024-12-09 05:17:46.673097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.673109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.198 [2024-12-09 05:17:46.673116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.673128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.198 [2024-12-09 05:17:46.673135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.198 [2024-12-09 05:17:46.673148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.199 [2024-12-09 05:17:46.673155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.199 [2024-12-09 05:17:46.673176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
00:23:26.199 [2024-12-09 05:17:46.673188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.199 [2024-12-09 05:17:46.673195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.199 [2024-12-09 05:17:46.673214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.199 [2024-12-09 05:17:46.673233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.199 [2024-12-09 05:17:46.673252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.199 [2024-12-09 05:17:46.673270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.199 [2024-12-09 05:17:46.673289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.199 [2024-12-09 05:17:46.673309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.199 [2024-12-09 05:17:46.673328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.199 [2024-12-09 05:17:46.673347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.199 [2024-12-09 05:17:46.673366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:117 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.199 [2024-12-09 05:17:46.673385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.199 [2024-12-09 05:17:46.673405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.199 [2024-12-09 05:17:46.673425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.199 [2024-12-09 05:17:46.673444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.199 [2024-12-09 05:17:46.673463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.199 [2024-12-09 05:17:46.673482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.199 [2024-12-09 05:17:46.673501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.199 [2024-12-09 05:17:46.673521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:74056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.199 [2024-12-09 05:17:46.673540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:74064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.199 [2024-12-09 05:17:46.673559] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:74072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.199 [2024-12-09 05:17:46.673578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:74080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.199 [2024-12-09 05:17:46.673597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:74088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.199 [2024-12-09 05:17:46.673617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.199 [2024-12-09 05:17:46.673637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.199 [2024-12-09 05:17:46.673657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.199 [2024-12-09 05:17:46.673953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.199 [2024-12-09 05:17:46.673973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.673986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.199 [2024-12-09 05:17:46.673993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.674010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.199 [2024-12-09 05:17:46.674017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.674030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:26.199 [2024-12-09 05:17:46.674036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.674049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.199 [2024-12-09 05:17:46.674056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.674068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.199 [2024-12-09 05:17:46.674075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.674089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.199 [2024-12-09 05:17:46.674096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:26.199 [2024-12-09 05:17:46.674109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.200 [2024-12-09 05:17:46.674115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.200 [2024-12-09 05:17:46.674135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.200 [2024-12-09 05:17:46.674154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.200 [2024-12-09 05:17:46.674176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.200 [2024-12-09 05:17:46.674195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.200 [2024-12-09 05:17:46.674215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 
lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.200 [2024-12-09 05:17:46.674234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.200 [2024-12-09 05:17:46.674253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.200 [2024-12-09 05:17:46.674272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.200 [2024-12-09 05:17:46.674292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.200 [2024-12-09 05:17:46.674311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.200 [2024-12-09 05:17:46.674330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.200 [2024-12-09 05:17:46.674350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.200 [2024-12-09 05:17:46.674369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.200 [2024-12-09 05:17:46.674388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.200 [2024-12-09 05:17:46.674412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.200 [2024-12-09 05:17:46.674685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.200 [2024-12-09 05:17:46.674705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.200 [2024-12-09 05:17:46.674725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.200 [2024-12-09 05:17:46.674746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.200 [2024-12-09 05:17:46.674765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.200 [2024-12-09 05:17:46.674785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.200 [2024-12-09 05:17:46.674804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.200 [2024-12-09 05:17:46.674823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.200 [2024-12-09 05:17:46.674842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.200 [2024-12-09 05:17:46.674861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
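Every command above completes with the same status, printed as "ASYMMETRIC ACCESS INACCESSIBLE (03/02)": status code type 3h (path related status) and status code 02h (asymmetric access inaccessible), with the phase, more, and do-not-retry bits all zero. The snippet below is a minimal, self-contained decoder for that status word, following the completion queue entry layout in the NVMe base specification; the struct and function names are illustrative assumptions, not SPDK's spdk_nvme_cpl definitions.

```c
/* Minimal decoder for the status seen in the completions above, e.g.
 * "(03/02) ... cdw0:0 sqhd:006b p:0 m:0 dnr:0".
 * Field layout follows CQE Dword 3 in the NVMe base specification;
 * this is a standalone sketch, not SPDK's own code. */
#include <stdint.h>
#include <stdio.h>

struct cqe_status {
    uint8_t p;    /* phase tag                       */
    uint8_t sc;   /* status code                     */
    uint8_t sct;  /* status code type                */
    uint8_t crd;  /* command retry delay             */
    uint8_t m;    /* more status information in log  */
    uint8_t dnr;  /* do not retry                    */
};

/* dw3 is Dword 3 of a completion queue entry: CID in bits 15:0,
 * phase tag in bit 16, status field in bits 31:17. */
static struct cqe_status decode_cqe_dw3(uint32_t dw3)
{
    struct cqe_status s;

    s.p   = (dw3 >> 16) & 0x1;
    s.sc  = (dw3 >> 17) & 0xff;
    s.sct = (dw3 >> 25) & 0x7;
    s.crd = (dw3 >> 28) & 0x3;
    s.m   = (dw3 >> 30) & 0x1;
    s.dnr = (dw3 >> 31) & 0x1;
    return s;
}

int main(void)
{
    /* SCT 0x3 (path related), SC 0x02 (asymmetric access inaccessible),
     * phase/more/dnr all zero -- matches the "(03/02) ... p:0 m:0 dnr:0"
     * completions printed in the trace above. */
    uint32_t dw3 = (0x3u << 25) | (0x02u << 17);
    struct cqe_status s = decode_cqe_dw3(dw3);

    printf("sct:%02x sc:%02x p:%u m:%u dnr:%u\n",
           s.sct, s.sc, s.p, s.m, s.dnr);
    return 0;
}
```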
00:23:26.200 [2024-12-09 05:17:46.674874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.200 [2024-12-09 05:17:46.674880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.200 [2024-12-09 05:17:46.674903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.200 [2024-12-09 05:17:46.674922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.200 [2024-12-09 05:17:46.674941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.200 [2024-12-09 05:17:46.674960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.200 [2024-12-09 05:17:46.674980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.674992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.200 [2024-12-09 05:17:46.675004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.675167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.200 [2024-12-09 05:17:46.675177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.675190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.200 [2024-12-09 05:17:46.675197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.675210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.200 [2024-12-09 05:17:46.675217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:10 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.675229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.200 [2024-12-09 05:17:46.675236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.675248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.200 [2024-12-09 05:17:46.675255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.675267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.200 [2024-12-09 05:17:46.675275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:26.200 [2024-12-09 05:17:46.675287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.201 [2024-12-09 05:17:46.675296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:26.201 [2024-12-09 05:17:46.675308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.201 [2024-12-09 05:17:46.675315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:26.201 [2024-12-09 05:17:46.675327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.201 [2024-12-09 05:17:46.675334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:26.201 [2024-12-09 05:17:46.675346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.201 [2024-12-09 05:17:46.675353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:26.201 [2024-12-09 05:17:46.675366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.201 [2024-12-09 05:17:46.675372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:26.201 [2024-12-09 05:17:46.675385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.201 [2024-12-09 05:17:46.675391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:26.201 [2024-12-09 05:17:46.675404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.201 [2024-12-09 05:17:46.675410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:26.201 [2024-12-09 05:17:46.675423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.201 [2024-12-09 05:17:46.675429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:26.201 [2024-12-09 05:17:46.675442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.201 [2024-12-09 05:17:46.675449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:26.201 [2024-12-09 05:17:46.675461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:73792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.201 [2024-12-09 05:17:46.675468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:26.201 [2024-12-09 05:17:46.675480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:73800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.201 [2024-12-09 05:17:46.675488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:26.201 [2024-12-09 05:17:46.675500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:73808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.201 [2024-12-09 05:17:46.675507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:26.201 [2024-12-09 05:17:46.675519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.201 [2024-12-09 05:17:46.675526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:26.201 [2024-12-09 05:17:46.675540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.201 [2024-12-09 05:17:46.675547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:26.201 [2024-12-09 05:17:46.675559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.201 [2024-12-09 05:17:46.675566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:26.201 [2024-12-09 05:17:46.675579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.201 [2024-12-09 05:17:46.675586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:26.201 [2024-12-09 05:17:46.675599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:26.201 [2024-12-09 05:17:46.675606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:23:26.201 [2024-12-09 05:17:46.675619 - 05:17:46.684567] nvme_qpair.c: [repeated *NOTICE* pairs from 243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion: READ commands sqid:1 nsid:1 lba:73784-74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and WRITE commands sqid:1 nsid:1 lba:74232-74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0]
00:23:26.206 [2024-12-09 05:17:46.684581] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.206 [2024-12-09 05:17:46.684588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.206 [2024-12-09 05:17:46.684602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.206 [2024-12-09 05:17:46.684609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:26.206 [2024-12-09 05:17:46.684622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.206 [2024-12-09 05:17:46.684629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:26.206 [2024-12-09 05:17:46.684644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.206 [2024-12-09 05:17:46.684650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:26.206 [2024-12-09 05:17:46.684665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.206 [2024-12-09 05:17:46.684671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:26.206 [2024-12-09 05:17:46.684685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.206 [2024-12-09 05:17:46.684692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:26.206 [2024-12-09 05:17:46.684706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.206 [2024-12-09 05:17:46.684713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:26.206 [2024-12-09 05:17:46.684727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.206 [2024-12-09 05:17:46.684733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:26.206 [2024-12-09 05:17:46.684749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.206 [2024-12-09 05:17:46.684756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:26.206 [2024-12-09 05:17:46.684770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.206 [2024-12-09 05:17:46.684777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006a p:0 m:0 
dnr:0 00:23:26.206 [2024-12-09 05:17:46.684790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.206 [2024-12-09 05:17:46.684797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:26.206 [2024-12-09 05:17:46.684811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.206 [2024-12-09 05:17:46.684818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:26.206 [2024-12-09 05:17:46.684832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.206 [2024-12-09 05:17:46.684839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:26.206 [2024-12-09 05:17:46.684853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.206 [2024-12-09 05:17:46.684859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:26.206 [2024-12-09 05:17:46.684873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.206 [2024-12-09 05:17:46.684880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:26.206 [2024-12-09 05:17:46.684894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.206 [2024-12-09 05:17:46.684901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:26.206 [2024-12-09 05:17:46.684915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.207 [2024-12-09 05:17:46.684922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.684936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.207 [2024-12-09 05:17:46.684943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.685555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.685564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.685580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.685587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.685604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.685611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.685626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.685633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.685648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.685655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.685670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.685677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.685691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.685698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.685713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.685720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.685735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.685742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.685757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.685763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.685778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.685785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.685800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.685807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.685822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.685829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.685844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.685851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.685866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.685874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.685889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.685896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.685911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.685918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.685970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.685978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.685995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.686007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.686023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.686029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.686046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.686052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.686068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:26.207 [2024-12-09 05:17:46.686075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.686091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.686098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.686114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.686120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.686136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.686143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.686159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.686166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.686182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.686190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.686206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.686213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.686229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.686236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.686252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.686258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.686274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.686281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.686297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74480 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.686304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.686320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.207 [2024-12-09 05:17:46.686327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.686343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.207 [2024-12-09 05:17:46.686350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.686366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:73792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.207 [2024-12-09 05:17:46.686373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.686388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:73800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.207 [2024-12-09 05:17:46.686395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.686411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:73808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.207 [2024-12-09 05:17:46.686418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:26.207 [2024-12-09 05:17:46.686434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.208 [2024-12-09 05:17:46.686441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.686457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.208 [2024-12-09 05:17:46.686463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.686481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.208 [2024-12-09 05:17:46.686488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.686504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.208 [2024-12-09 05:17:46.686511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.686527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:73848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.208 [2024-12-09 05:17:46.686534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.686550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:73856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.208 [2024-12-09 05:17:46.686557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.686572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.208 [2024-12-09 05:17:46.686579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.686595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:73872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.208 [2024-12-09 05:17:46.686602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.686618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.208 [2024-12-09 05:17:46.686625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.686641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:73888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.208 [2024-12-09 05:17:46.686648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.686664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.208 [2024-12-09 05:17:46.686671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.686687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.208 [2024-12-09 05:17:46.686693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.686709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.208 [2024-12-09 05:17:46.686716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.686732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.208 [2024-12-09 05:17:46.686739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 
00:23:26.208 [2024-12-09 05:17:46.686757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.208 [2024-12-09 05:17:46.686763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.686779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.208 [2024-12-09 05:17:46.686786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.686802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.208 [2024-12-09 05:17:46.686809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.686824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.208 [2024-12-09 05:17:46.686831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.686847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.208 [2024-12-09 05:17:46.686854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.686870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.208 [2024-12-09 05:17:46.686876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.686893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.208 [2024-12-09 05:17:46.686899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.686915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.208 [2024-12-09 05:17:46.686922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.686938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.208 [2024-12-09 05:17:46.686945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.686961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:73936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.208 [2024-12-09 05:17:46.686968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.686984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.208 [2024-12-09 05:17:46.686991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.687011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:73952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.208 [2024-12-09 05:17:46.687018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.687034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.208 [2024-12-09 05:17:46.687043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.687059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.208 [2024-12-09 05:17:46.687066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.687082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.208 [2024-12-09 05:17:46.687089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.687105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.208 [2024-12-09 05:17:46.687111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.687128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:73992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.208 [2024-12-09 05:17:46.687134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.687150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.208 [2024-12-09 05:17:46.687158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.687174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:74008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.208 [2024-12-09 05:17:46.687181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.687198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.208 [2024-12-09 05:17:46.687205] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.687221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:74024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.208 [2024-12-09 05:17:46.687228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:26.208 [2024-12-09 05:17:46.687244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:74032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.209 [2024-12-09 05:17:46.687251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:26.209 10791.92 IOPS, 42.16 MiB/s [2024-12-09T04:18:02.855Z] 10021.07 IOPS, 39.14 MiB/s [2024-12-09T04:18:02.855Z] 9353.00 IOPS, 36.54 MiB/s [2024-12-09T04:18:02.855Z] 8863.06 IOPS, 34.62 MiB/s [2024-12-09T04:18:02.855Z] 8972.76 IOPS, 35.05 MiB/s [2024-12-09T04:18:02.855Z] 9069.44 IOPS, 35.43 MiB/s [2024-12-09T04:18:02.855Z] 9232.37 IOPS, 36.06 MiB/s [2024-12-09T04:18:02.855Z] 9432.70 IOPS, 36.85 MiB/s [2024-12-09T04:18:02.855Z] 9596.14 IOPS, 37.48 MiB/s [2024-12-09T04:18:02.855Z] 9652.41 IOPS, 37.70 MiB/s [2024-12-09T04:18:02.855Z] 9696.61 IOPS, 37.88 MiB/s [2024-12-09T04:18:02.855Z] 9754.92 IOPS, 38.11 MiB/s [2024-12-09T04:18:02.855Z] 9886.88 IOPS, 38.62 MiB/s [2024-12-09T04:18:02.855Z] 10003.65 IOPS, 39.08 MiB/s [2024-12-09T04:18:02.855Z] [2024-12-09 05:18:00.195719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.195764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.195805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.195816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.195830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.195838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.195851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.195858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.195870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.195877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.195890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18584 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.195898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.195913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.195920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.195932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.195940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.197309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.197352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.197374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.197394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.197414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.197435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.197459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.197480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:50 nsid:1 lba:18744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.197500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.197520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.197542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.197562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.197582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.197604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.197624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.197645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.197666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.197687] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.197710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.197731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.197752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.197773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.197792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.197813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.197833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.197853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:26.209 [2024-12-09 05:18:00.197874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:26.209 [2024-12-09 05:18:00.197893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.209 [2024-12-09 05:18:00.197901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.210 [2024-12-09 05:18:00.197913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.210 [2024-12-09 05:18:00.197920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:26.210 [2024-12-09 05:18:00.197934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.210 [2024-12-09 05:18:00.197941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:26.210 [2024-12-09 05:18:00.197958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.210 [2024-12-09 05:18:00.197966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:26.210 [2024-12-09 05:18:00.197980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.210 [2024-12-09 05:18:00.197988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:26.210 [2024-12-09 05:18:00.198007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.210 [2024-12-09 05:18:00.198016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:26.210 [2024-12-09 05:18:00.198854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.210 [2024-12-09 05:18:00.198868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:26.210 [2024-12-09 05:18:00.198884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.210 [2024-12-09 05:18:00.198892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:26.210 [2024-12-09 05:18:00.198906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.210 [2024-12-09 05:18:00.198913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:26.210 [2024-12-09 05:18:00.198926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.210 [2024-12-09 05:18:00.198934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:37 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:26.210 [2024-12-09 05:18:00.198948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.210 [2024-12-09 05:18:00.198956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:26.210 [2024-12-09 05:18:00.198969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.210 [2024-12-09 05:18:00.198976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:26.210 [2024-12-09 05:18:00.198988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.210 [2024-12-09 05:18:00.198996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:26.210 [2024-12-09 05:18:00.199017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:26.210 [2024-12-09 05:18:00.199025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:26.210 10057.30 IOPS, 39.29 MiB/s [2024-12-09T04:18:02.856Z] 10082.75 IOPS, 39.39 MiB/s [2024-12-09T04:18:02.856Z] Received shutdown signal, test time was about 28.656925 seconds 00:23:26.210 00:23:26.210 Latency(us) 00:23:26.210 [2024-12-09T04:18:02.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.210 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:26.210 Verification LBA range: start 0x0 length 0x4000 00:23:26.210 Nvme0n1 : 28.66 10095.28 39.43 0.00 0.00 12657.88 1075.65 3078254.41 00:23:26.210 [2024-12-09T04:18:02.856Z] =================================================================================================================== 00:23:26.210 [2024-12-09T04:18:02.856Z] Total : 10095.28 39.43 0.00 0.00 12657.88 1075.65 3078254.41 00:23:26.210 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:26.468 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:26.468 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:26.468 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:26.468 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:26.468 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:23:26.468 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:26.468 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:23:26.468 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:26.468 05:18:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:26.468 rmmod nvme_tcp 00:23:26.468 rmmod nvme_fabrics 00:23:26.468 rmmod nvme_keyring 00:23:26.468 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:26.468 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:23:26.468 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:23:26.468 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3688236 ']' 00:23:26.468 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3688236 00:23:26.468 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3688236 ']' 00:23:26.468 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3688236 00:23:26.468 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:26.468 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:26.468 05:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3688236 00:23:26.468 05:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:26.468 05:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:26.468 05:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3688236' 00:23:26.468 killing process with pid 3688236 00:23:26.468 05:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3688236 00:23:26.468 05:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3688236 00:23:26.727 05:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:26.727 05:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:26.727 05:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:26.727 05:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:23:26.727 05:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:23:26.727 05:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:23:26.727 05:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:26.727 05:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:26.727 05:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:26.727 05:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.727 05:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:26.727 05:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.256 05:18:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:29.256 00:23:29.256 real 0m39.559s 00:23:29.256 user 1m49.012s 00:23:29.256 sys 0m10.755s 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:29.256 ************************************ 00:23:29.256 END TEST nvmf_host_multipath_status 00:23:29.256 ************************************ 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.256 ************************************ 00:23:29.256 START TEST nvmf_discovery_remove_ifc 00:23:29.256 ************************************ 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:29.256 * Looking for test storage... 00:23:29.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:29.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.256 --rc genhtml_branch_coverage=1 00:23:29.256 --rc genhtml_function_coverage=1 00:23:29.256 --rc genhtml_legend=1 00:23:29.256 --rc geninfo_all_blocks=1 00:23:29.256 --rc geninfo_unexecuted_blocks=1 00:23:29.256 00:23:29.256 ' 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:29.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.256 --rc genhtml_branch_coverage=1 00:23:29.256 --rc genhtml_function_coverage=1 00:23:29.256 --rc genhtml_legend=1 00:23:29.256 --rc geninfo_all_blocks=1 00:23:29.256 --rc geninfo_unexecuted_blocks=1 00:23:29.256 00:23:29.256 ' 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:29.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.256 --rc genhtml_branch_coverage=1 00:23:29.256 --rc genhtml_function_coverage=1 00:23:29.256 --rc genhtml_legend=1 00:23:29.256 --rc geninfo_all_blocks=1 00:23:29.256 --rc geninfo_unexecuted_blocks=1 00:23:29.256 00:23:29.256 ' 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:29.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.256 --rc genhtml_branch_coverage=1 00:23:29.256 --rc genhtml_function_coverage=1 00:23:29.256 --rc genhtml_legend=1 00:23:29.256 --rc geninfo_all_blocks=1 00:23:29.256 --rc geninfo_unexecuted_blocks=1 00:23:29.256 00:23:29.256 ' 00:23:29.256 05:18:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:23:29.256 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:23:29.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:23:29.257 05:18:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:34.525 05:18:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:34.525 Found 
0000:86:00.0 (0x8086 - 0x159b) 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:34.525 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:34.525 Found net devices under 0000:86:00.0: cvl_0_0 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:34.525 Found net devices under 0000:86:00.1: cvl_0_1 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:34.525 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:34.526 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:34.526 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:34.526 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:34.526 05:18:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:34.526 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:34.526 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:34.526 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:34.526 05:18:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:34.526 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:34.526 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:34.526 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:34.526 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:34.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:34.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:23:34.526 00:23:34.526 --- 10.0.0.2 ping statistics --- 00:23:34.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.526 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:23:34.526 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:34.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:34.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:23:34.526 00:23:34.526 --- 10.0.0.1 ping statistics --- 00:23:34.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.526 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:23:34.526 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:34.526 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:23:34.526 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:34.526 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:34.526 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:34.526 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:34.526 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:34.526 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:34.785 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:34.785 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:34.785 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:34.785 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:34.785 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:34.785 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3697551 00:23:34.785 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3697551 00:23:34.785 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
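The ip netns / iptables / ping steps traced above are the whole data-path setup for this test: the e810 port cvl_0_0 is moved into a private namespace and addressed as 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of that sequence, using only commands that appear in the trace (the real nvmf_tcp_init in nvmf/common.sh also flushes stale addresses and handles optional second IPs):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator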
00:23:34.785 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3697551 ']' 00:23:34.785 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.785 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:34.785 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.785 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:34.785 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:34.785 [2024-12-09 05:18:11.263840] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:23:34.786 [2024-12-09 05:18:11.263887] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.786 [2024-12-09 05:18:11.331798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.786 [2024-12-09 05:18:11.370565] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.786 [2024-12-09 05:18:11.370600] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.786 [2024-12-09 05:18:11.370607] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.786 [2024-12-09 05:18:11.370613] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.786 [2024-12-09 05:18:11.370618] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
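nvmfappstart then runs the target binary inside that namespace and waits for its RPC socket before any configuration is attempted; roughly as follows (backgrounding and PID capture shown in simplified form, the captured PID here is 3697551):

  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"    # blocks until /var/tmp/spdk.sock answers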
00:23:34.786 [2024-12-09 05:18:11.371225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.045 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.045 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:35.045 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:35.045 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:35.045 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.045 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.045 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:35.045 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.045 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.045 [2024-12-09 05:18:11.515932] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.045 [2024-12-09 05:18:11.524151] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:35.045 null0 00:23:35.045 [2024-12-09 05:18:11.556097] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:35.045 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.045 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3697571 00:23:35.045 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3697571 /tmp/host.sock 00:23:35.045 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3697571 ']' 00:23:35.045 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:35.045 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.045 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:35.045 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:35.045 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.045 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.045 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:35.045 [2024-12-09 05:18:11.623842] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:23:35.045 [2024-12-09 05:18:11.623885] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3697571 ] 00:23:35.306 [2024-12-09 05:18:11.689419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.306 [2024-12-09 05:18:11.732512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.306 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.306 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:35.306 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:35.306 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:35.306 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.306 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.306 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.306 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:35.306 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.306 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.306 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.306 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:35.306 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.306 05:18:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:36.684 [2024-12-09 05:18:12.903528] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:36.684 [2024-12-09 05:18:12.903549] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:36.684 [2024-12-09 05:18:12.903565] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:36.684 [2024-12-09 05:18:13.031969] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:36.684 [2024-12-09 05:18:13.253201] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:36.684 [2024-12-09 05:18:13.254017] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x932a50:1 started. 
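Condensed, the host side of what the trace just showed is a second SPDK app on /tmp/host.sock plus three RPCs, the last of which blocks until the discovered subsystem's I/O controller is attached (launch and PID capture again shown in simplified form):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  hostpid=$!
  rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
  rpc_cmd -s /tmp/host.sock framework_start_init
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach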
00:23:36.684 [2024-12-09 05:18:13.255217] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:36.684 [2024-12-09 05:18:13.255262] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:36.684 [2024-12-09 05:18:13.255285] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:36.684 [2024-12-09 05:18:13.255298] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:36.684 [2024-12-09 05:18:13.255319] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:36.684 05:18:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.684 05:18:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:36.684 05:18:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:36.684 05:18:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.684 05:18:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:36.684 05:18:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:36.684 05:18:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:36.684 05:18:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.684 05:18:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:36.684 [2024-12-09 05:18:13.261899] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x932a50 was disconnected and freed. delete nvme_qpair. 
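The wait_for_bdev / get_bdev_list pair that the rest of this trace repeats once per second reduces to polling bdev_get_bdevs on the host socket; as implied by the trace (the real helpers in discovery_remove_ifc.sh may also bound the number of retries):

  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      local expected=$1                      # e.g. nvme0n1, nvme1n1, or '' for "no bdevs left"
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }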
00:23:36.684 05:18:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.684 05:18:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:36.684 05:18:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:36.684 05:18:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:36.942 05:18:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:36.942 05:18:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:36.942 05:18:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.942 05:18:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:36.942 05:18:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:36.942 05:18:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.942 05:18:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:36.942 05:18:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:36.942 05:18:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.942 05:18:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:36.942 05:18:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:37.877 05:18:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:37.877 05:18:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:37.877 05:18:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.877 05:18:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:37.877 05:18:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.877 05:18:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:37.877 05:18:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:37.877 05:18:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.877 05:18:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:37.877 05:18:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:39.253 05:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:39.253 05:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.253 05:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:39.253 05:18:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.253 05:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:39.253 05:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:39.253 05:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:39.253 05:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.253 05:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:39.253 05:18:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:40.187 05:18:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:40.187 05:18:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:40.187 05:18:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:40.187 05:18:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:40.187 05:18:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.187 05:18:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:40.187 05:18:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:40.187 05:18:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.187 05:18:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:40.187 05:18:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:41.120 05:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:41.120 05:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.120 05:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:41.120 05:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:41.120 05:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.120 05:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:41.120 05:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:41.120 05:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.120 05:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:41.120 05:18:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:42.055 05:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:42.056 05:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:42.056 05:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:42.056 05:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:42.056 05:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.056 05:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:42.056 05:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:42.056 05:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.056 [2024-12-09 05:18:18.696981] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:42.056 [2024-12-09 05:18:18.697024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.056 [2024-12-09 05:18:18.697035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.056 [2024-12-09 05:18:18.697050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.056 [2024-12-09 05:18:18.697057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.056 [2024-12-09 05:18:18.697065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.056 [2024-12-09 05:18:18.697072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.056 [2024-12-09 05:18:18.697080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.056 [2024-12-09 05:18:18.697087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.056 [2024-12-09 05:18:18.697094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.056 [2024-12-09 05:18:18.697101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.056 [2024-12-09 05:18:18.697108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90f240 is same with the state(6) to be set 00:23:42.315 05:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:42.315 05:18:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:42.315 [2024-12-09 05:18:18.707005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x90f240 (9): Bad file descriptor 00:23:42.315 [2024-12-09 05:18:18.717040] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:42.315 [2024-12-09 05:18:18.717051] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
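The "remove ifc" step itself, condensed from the trace above: the target-side address and link are torn down inside the namespace, and the test then waits for the bdev list to go empty once the host gives up reconnecting (per the --ctrlr-loss-timeout-sec 2 discovery option used earlier):

  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
  wait_for_bdev ''     # nvme0n1 must disappear after the reconnect attempts fail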
00:23:42.315 [2024-12-09 05:18:18.717055] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:42.315 [2024-12-09 05:18:18.717060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:42.315 [2024-12-09 05:18:18.717081] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:43.251 05:18:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:43.251 05:18:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:43.251 05:18:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:43.251 05:18:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:43.251 05:18:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.251 05:18:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:43.251 05:18:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:43.251 [2024-12-09 05:18:19.767064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:43.251 [2024-12-09 05:18:19.767117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x90f240 with addr=10.0.0.2, port=4420 00:23:43.251 [2024-12-09 05:18:19.767135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90f240 is same with the state(6) to be set 00:23:43.251 [2024-12-09 05:18:19.767164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x90f240 (9): Bad file descriptor 00:23:43.251 [2024-12-09 05:18:19.767579] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:23:43.251 [2024-12-09 05:18:19.767608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:43.251 [2024-12-09 05:18:19.767619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:43.251 [2024-12-09 05:18:19.767629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:43.251 [2024-12-09 05:18:19.767639] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:43.251 [2024-12-09 05:18:19.767646] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:43.251 [2024-12-09 05:18:19.767652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:43.251 [2024-12-09 05:18:19.767662] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:23:43.251 [2024-12-09 05:18:19.767668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:43.251 05:18:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.251 05:18:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:43.251 05:18:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:44.187 [2024-12-09 05:18:20.770149] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:44.187 [2024-12-09 05:18:20.770174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:44.187 [2024-12-09 05:18:20.770187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:44.187 [2024-12-09 05:18:20.770194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:44.187 [2024-12-09 05:18:20.770202] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:23:44.187 [2024-12-09 05:18:20.770209] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:44.187 [2024-12-09 05:18:20.770214] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:44.187 [2024-12-09 05:18:20.770218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:44.187 [2024-12-09 05:18:20.770241] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:44.187 [2024-12-09 05:18:20.770267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.187 [2024-12-09 05:18:20.770278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.187 [2024-12-09 05:18:20.770289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.187 [2024-12-09 05:18:20.770296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.187 [2024-12-09 05:18:20.770308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.187 [2024-12-09 05:18:20.770315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.187 [2024-12-09 05:18:20.770322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.187 [2024-12-09 05:18:20.770328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.187 [2024-12-09 05:18:20.770335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.187 [2024-12-09 05:18:20.770342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.187 [2024-12-09 05:18:20.770350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:23:44.187 [2024-12-09 05:18:20.770430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8fe910 (9): Bad file descriptor 00:23:44.187 [2024-12-09 05:18:20.771443] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:44.187 [2024-12-09 05:18:20.771454] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:23:44.187 05:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:44.187 05:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:44.187 05:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:44.187 05:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:44.187 05:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.187 05:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:44.187 05:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:44.187 05:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.446 05:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:44.446 05:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.446 05:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.446 05:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:44.446 05:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:44.446 05:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:44.446 05:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:44.446 05:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.446 05:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:44.446 05:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:44.446 05:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:44.446 05:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.446 05:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:44.446 05:18:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:45.391 05:18:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:45.391 05:18:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:45.391 05:18:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:45.391 05:18:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.391 05:18:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:45.391 05:18:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:45.391 05:18:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:45.391 05:18:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.391 05:18:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:45.391 05:18:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:46.328 [2024-12-09 05:18:22.828156] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:46.328 [2024-12-09 05:18:22.828173] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:46.328 [2024-12-09 05:18:22.828187] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:46.328 [2024-12-09 05:18:22.914450] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:46.587 05:18:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:46.587 05:18:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.587 05:18:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:46.587 05:18:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.587 05:18:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:46.587 05:18:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:46.587 05:18:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:46.587 05:18:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.587 [2024-12-09 05:18:23.050349] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:23:46.587 [2024-12-09 05:18:23.051075] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x93c4a0:1 started. 
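The backslash-heavy comparisons in this stretch of the trace, such as [[ '' != \n\v\m\e\1\n\1 ]], are not corruption: inside [[ ]] the right-hand side of != is a glob pattern, so the script quotes it to force a literal match, and bash's xtrace prints the quoted word with every character escaped. A small standalone illustration of the behavior:

    # Inside [[ ]], an unquoted right-hand side is a pattern; a quoted one is literal.
    name=nvme1n1
    [[ $name == nvme*     ]] && echo "glob match: nvme* matches nvme1n1"
    [[ $name == "nvme1n1" ]] && echo "literal match: the quoted form compares exactly"
    # Under 'set -x' the quoted operand is shown as \n\v\m\e\1\n\1, as in this log.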
00:23:46.587 [2024-12-09 05:18:23.052136] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:46.587 [2024-12-09 05:18:23.052168] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:46.587 [2024-12-09 05:18:23.052187] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:46.587 [2024-12-09 05:18:23.052201] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:46.587 [2024-12-09 05:18:23.052208] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:46.587 [2024-12-09 05:18:23.057099] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x93c4a0 was disconnected and freed. delete nvme_qpair. 00:23:46.587 05:18:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:46.587 05:18:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:47.522 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:47.522 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:47.522 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:47.522 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:47.522 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.522 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:47.522 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:47.522 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.522 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:47.523 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:47.523 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3697571 00:23:47.523 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3697571 ']' 00:23:47.523 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3697571 00:23:47.523 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:47.523 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:47.523 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3697571 00:23:47.782 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:47.782 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:47.782 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3697571' 00:23:47.782 killing process with pid 3697571 
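killprocess, traced above and below, follows a defensive pattern that can be read straight out of the xtrace: require a pid, confirm the process still exists, look up its command name with ps (refusing to kill a bare sudo), then announce, kill and wait. A rough reconstruction of that helper implied by the trace, not the literal autotest_common.sh source:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1              # the '[ -z 3697571 ]' guard in the trace
        kill -0 "$pid" || return 0             # already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1 # never kill a raw sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                    # the 'wait 3697571' seen in the trace
    }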
00:23:47.782 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3697571 00:23:47.782 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3697571 00:23:47.782 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:47.782 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:47.782 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:23:47.782 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:47.782 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:23:47.782 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:47.782 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:47.782 rmmod nvme_tcp 00:23:47.782 rmmod nvme_fabrics 00:23:47.782 rmmod nvme_keyring 00:23:47.782 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:47.782 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:23:47.782 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:23:47.782 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3697551 ']' 00:23:47.782 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3697551 00:23:47.782 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3697551 ']' 00:23:47.782 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3697551 00:23:47.782 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:47.782 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:47.782 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3697551 00:23:48.041 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:48.041 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:48.041 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3697551' 00:23:48.041 killing process with pid 3697551 00:23:48.042 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3697551 00:23:48.042 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3697551 00:23:48.042 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:48.042 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:48.042 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:48.042 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:23:48.042 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:23:48.042 05:18:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:48.042 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:23:48.042 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:48.042 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:48.042 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.042 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.042 05:18:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.578 05:18:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:50.578 00:23:50.578 real 0m21.340s 00:23:50.578 user 0m26.955s 00:23:50.578 sys 0m5.631s 00:23:50.578 05:18:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:50.578 05:18:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:50.578 ************************************ 00:23:50.578 END TEST nvmf_discovery_remove_ifc 00:23:50.578 ************************************ 00:23:50.578 05:18:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:50.578 05:18:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:50.578 05:18:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:50.578 05:18:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.578 ************************************ 00:23:50.578 START TEST nvmf_identify_kernel_target 00:23:50.578 ************************************ 00:23:50.578 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:50.578 * Looking for test storage... 
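The firewall part of nvmftestfini is simple because every rule the tests install is tagged: the ipts wrapper (visible later in this log when identify_kernel_target opens port 4420) appends an 'SPDK_NVMF:' comment to each rule, so teardown is just dump, filter, reload, exactly as the iptr trace above shows:

    # Add side: tag the rule so it can be found again (taken from the setup trace further down).
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Remove side: drop every tagged rule in one pass (the iptr function traced above).
    iptables-save | grep -v SPDK_NVMF | iptables-restore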
00:23:50.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:50.578 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:50.578 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:23:50.578 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:50.578 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:50.578 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:50.578 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:50.578 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:50.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.579 --rc genhtml_branch_coverage=1 00:23:50.579 --rc genhtml_function_coverage=1 00:23:50.579 --rc genhtml_legend=1 00:23:50.579 --rc geninfo_all_blocks=1 00:23:50.579 --rc geninfo_unexecuted_blocks=1 00:23:50.579 00:23:50.579 ' 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:50.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.579 --rc genhtml_branch_coverage=1 00:23:50.579 --rc genhtml_function_coverage=1 00:23:50.579 --rc genhtml_legend=1 00:23:50.579 --rc geninfo_all_blocks=1 00:23:50.579 --rc geninfo_unexecuted_blocks=1 00:23:50.579 00:23:50.579 ' 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:50.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.579 --rc genhtml_branch_coverage=1 00:23:50.579 --rc genhtml_function_coverage=1 00:23:50.579 --rc genhtml_legend=1 00:23:50.579 --rc geninfo_all_blocks=1 00:23:50.579 --rc geninfo_unexecuted_blocks=1 00:23:50.579 00:23:50.579 ' 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:50.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.579 --rc genhtml_branch_coverage=1 00:23:50.579 --rc genhtml_function_coverage=1 00:23:50.579 --rc genhtml_legend=1 00:23:50.579 --rc geninfo_all_blocks=1 00:23:50.579 --rc geninfo_unexecuted_blocks=1 00:23:50.579 00:23:50.579 ' 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:50.579 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.580 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:23:50.580 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:50.580 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:50.580 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.580 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.580 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.580 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:23:50.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:50.580 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:50.580 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:50.580 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:50.580 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:50.580 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:50.580 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.580 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:50.580 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:50.580 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:50.580 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.580 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:50.580 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.580 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:50.580 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:50.580 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:23:50.580 05:18:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.843 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:55.843 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:23:55.843 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:55.843 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:55.843 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:55.843 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:55.843 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:55.843 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:23:55.843 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:55.843 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:23:55.843 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:23:55.843 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:23:55.843 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:23:55.843 05:18:32 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:23:55.843 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:23:55.843 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:55.843 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:55.843 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:55.843 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:55.843 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:55.843 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:55.843 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:55.844 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:55.844 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:55.844 Found net devices under 0000:86:00.0: cvl_0_0 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:55.844 Found net devices under 0000:86:00.1: cvl_0_1 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:55.844 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:56.102 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:56.102 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:56.102 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:56.102 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:56.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:56.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:23:56.102 00:23:56.102 --- 10.0.0.2 ping statistics --- 00:23:56.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.102 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:23:56.102 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:56.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:56.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:23:56.102 00:23:56.102 --- 10.0.0.1 ping statistics --- 00:23:56.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.102 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:23:56.102 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:56.102 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:23:56.102 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:56.102 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:56.102 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:56.102 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:56.102 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:56.102 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:56.102 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:56.102 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:56.102 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:56.102 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:23:56.102 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:56.102 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:56.103 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.103 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.103 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:56.103 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.103 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:56.103 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:56.103 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:56.103 05:18:32 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:56.103 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:56.103 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:56.103 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:56.103 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:56.103 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:56.103 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:56.103 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:23:56.103 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:23:56.103 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:56.103 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:56.103 05:18:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:58.637 Waiting for block devices as requested 00:23:58.637 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:23:58.895 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:23:58.895 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:23:58.895 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:23:59.153 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:23:59.153 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:23:59.153 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:23:59.153 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:23:59.411 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:23:59.411 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:23:59.411 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:23:59.411 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:23:59.669 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:23:59.669 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:23:59.669 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:23:59.926 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:23:59.926 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:23:59.926 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:59.926 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:59.926 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:59.926 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:59.926 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:59.926 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
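configure_kernel_target drives the in-kernel nvmet target entirely through configfs: the subsystem, namespace and port paths defined above are created with mkdir, filled in with echo, and joined with ln -s, which is the sequence of mkdir/echo/ln calls that follows in the log. xtrace does not show which attribute file each echo lands in, so the file names below are the standard Linux nvmet configfs attributes and the mapping is an inference, not a literal copy of the script:

    modprobe nvmet
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub"
    mkdir "$sub/namespaces/1"
    mkdir "$port"
    # the 'echo SPDK-nqn...' write presumably sets the model string (attr_model, where supported)
    echo 1            > "$sub/attr_allow_any_host"
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"   # expose the subsystem on the port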
00:23:59.926 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:59.926 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:59.926 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:59.927 No valid GPT data, bailing 00:23:59.927 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:59.927 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:59.927 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:59.927 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:59.927 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:23:59.927 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:00.185 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:00.185 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:00.185 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:00.185 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:24:00.185 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:00.185 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:24:00.185 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:00.185 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:24:00.185 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:24:00.185 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:24:00.185 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:00.185 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:00.185 00:24:00.185 Discovery Log Number of Records 2, Generation counter 2 00:24:00.185 =====Discovery Log Entry 0====== 00:24:00.185 trtype: tcp 00:24:00.185 adrfam: ipv4 00:24:00.185 subtype: current discovery subsystem 00:24:00.185 treq: not specified, sq flow control disable supported 00:24:00.185 portid: 1 00:24:00.185 trsvcid: 4420 00:24:00.185 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:00.185 traddr: 10.0.0.1 00:24:00.185 eflags: none 00:24:00.185 sectype: none 00:24:00.185 =====Discovery Log Entry 1====== 00:24:00.185 trtype: tcp 00:24:00.185 adrfam: ipv4 00:24:00.185 subtype: nvme subsystem 00:24:00.185 treq: not specified, sq flow control disable 
supported 00:24:00.185 portid: 1 00:24:00.185 trsvcid: 4420 00:24:00.185 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:00.185 traddr: 10.0.0.1 00:24:00.185 eflags: none 00:24:00.185 sectype: none 00:24:00.185 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:00.185 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:00.185 ===================================================== 00:24:00.185 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:00.185 ===================================================== 00:24:00.185 Controller Capabilities/Features 00:24:00.185 ================================ 00:24:00.185 Vendor ID: 0000 00:24:00.185 Subsystem Vendor ID: 0000 00:24:00.185 Serial Number: d8ffb9a08e3816c79c0b 00:24:00.185 Model Number: Linux 00:24:00.185 Firmware Version: 6.8.9-20 00:24:00.185 Recommended Arb Burst: 0 00:24:00.185 IEEE OUI Identifier: 00 00 00 00:24:00.185 Multi-path I/O 00:24:00.185 May have multiple subsystem ports: No 00:24:00.185 May have multiple controllers: No 00:24:00.185 Associated with SR-IOV VF: No 00:24:00.185 Max Data Transfer Size: Unlimited 00:24:00.185 Max Number of Namespaces: 0 00:24:00.185 Max Number of I/O Queues: 1024 00:24:00.185 NVMe Specification Version (VS): 1.3 00:24:00.185 NVMe Specification Version (Identify): 1.3 00:24:00.185 Maximum Queue Entries: 1024 00:24:00.185 Contiguous Queues Required: No 00:24:00.185 Arbitration Mechanisms Supported 00:24:00.185 Weighted Round Robin: Not Supported 00:24:00.185 Vendor Specific: Not Supported 00:24:00.185 Reset Timeout: 7500 ms 00:24:00.185 Doorbell Stride: 4 bytes 00:24:00.185 NVM Subsystem Reset: Not Supported 00:24:00.185 Command Sets Supported 00:24:00.185 NVM Command Set: Supported 00:24:00.185 Boot Partition: Not Supported 00:24:00.185 Memory Page Size Minimum: 4096 bytes 00:24:00.185 Memory Page Size Maximum: 4096 bytes 00:24:00.185 Persistent Memory Region: Not Supported 00:24:00.185 Optional Asynchronous Events Supported 00:24:00.185 Namespace Attribute Notices: Not Supported 00:24:00.185 Firmware Activation Notices: Not Supported 00:24:00.185 ANA Change Notices: Not Supported 00:24:00.185 PLE Aggregate Log Change Notices: Not Supported 00:24:00.185 LBA Status Info Alert Notices: Not Supported 00:24:00.185 EGE Aggregate Log Change Notices: Not Supported 00:24:00.185 Normal NVM Subsystem Shutdown event: Not Supported 00:24:00.185 Zone Descriptor Change Notices: Not Supported 00:24:00.185 Discovery Log Change Notices: Supported 00:24:00.185 Controller Attributes 00:24:00.185 128-bit Host Identifier: Not Supported 00:24:00.185 Non-Operational Permissive Mode: Not Supported 00:24:00.185 NVM Sets: Not Supported 00:24:00.185 Read Recovery Levels: Not Supported 00:24:00.185 Endurance Groups: Not Supported 00:24:00.185 Predictable Latency Mode: Not Supported 00:24:00.185 Traffic Based Keep ALive: Not Supported 00:24:00.185 Namespace Granularity: Not Supported 00:24:00.185 SQ Associations: Not Supported 00:24:00.185 UUID List: Not Supported 00:24:00.185 Multi-Domain Subsystem: Not Supported 00:24:00.185 Fixed Capacity Management: Not Supported 00:24:00.185 Variable Capacity Management: Not Supported 00:24:00.185 Delete Endurance Group: Not Supported 00:24:00.186 Delete NVM Set: Not Supported 00:24:00.186 Extended LBA Formats Supported: Not Supported 00:24:00.186 Flexible Data Placement 
Supported: Not Supported 00:24:00.186 00:24:00.186 Controller Memory Buffer Support 00:24:00.186 ================================ 00:24:00.186 Supported: No 00:24:00.186 00:24:00.186 Persistent Memory Region Support 00:24:00.186 ================================ 00:24:00.186 Supported: No 00:24:00.186 00:24:00.186 Admin Command Set Attributes 00:24:00.186 ============================ 00:24:00.186 Security Send/Receive: Not Supported 00:24:00.186 Format NVM: Not Supported 00:24:00.186 Firmware Activate/Download: Not Supported 00:24:00.186 Namespace Management: Not Supported 00:24:00.186 Device Self-Test: Not Supported 00:24:00.186 Directives: Not Supported 00:24:00.186 NVMe-MI: Not Supported 00:24:00.186 Virtualization Management: Not Supported 00:24:00.186 Doorbell Buffer Config: Not Supported 00:24:00.186 Get LBA Status Capability: Not Supported 00:24:00.186 Command & Feature Lockdown Capability: Not Supported 00:24:00.186 Abort Command Limit: 1 00:24:00.186 Async Event Request Limit: 1 00:24:00.186 Number of Firmware Slots: N/A 00:24:00.186 Firmware Slot 1 Read-Only: N/A 00:24:00.186 Firmware Activation Without Reset: N/A 00:24:00.186 Multiple Update Detection Support: N/A 00:24:00.186 Firmware Update Granularity: No Information Provided 00:24:00.186 Per-Namespace SMART Log: No 00:24:00.186 Asymmetric Namespace Access Log Page: Not Supported 00:24:00.186 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:00.186 Command Effects Log Page: Not Supported 00:24:00.186 Get Log Page Extended Data: Supported 00:24:00.186 Telemetry Log Pages: Not Supported 00:24:00.186 Persistent Event Log Pages: Not Supported 00:24:00.186 Supported Log Pages Log Page: May Support 00:24:00.186 Commands Supported & Effects Log Page: Not Supported 00:24:00.186 Feature Identifiers & Effects Log Page:May Support 00:24:00.186 NVMe-MI Commands & Effects Log Page: May Support 00:24:00.186 Data Area 4 for Telemetry Log: Not Supported 00:24:00.186 Error Log Page Entries Supported: 1 00:24:00.186 Keep Alive: Not Supported 00:24:00.186 00:24:00.186 NVM Command Set Attributes 00:24:00.186 ========================== 00:24:00.186 Submission Queue Entry Size 00:24:00.186 Max: 1 00:24:00.186 Min: 1 00:24:00.186 Completion Queue Entry Size 00:24:00.186 Max: 1 00:24:00.186 Min: 1 00:24:00.186 Number of Namespaces: 0 00:24:00.186 Compare Command: Not Supported 00:24:00.186 Write Uncorrectable Command: Not Supported 00:24:00.186 Dataset Management Command: Not Supported 00:24:00.186 Write Zeroes Command: Not Supported 00:24:00.186 Set Features Save Field: Not Supported 00:24:00.186 Reservations: Not Supported 00:24:00.186 Timestamp: Not Supported 00:24:00.186 Copy: Not Supported 00:24:00.186 Volatile Write Cache: Not Present 00:24:00.186 Atomic Write Unit (Normal): 1 00:24:00.186 Atomic Write Unit (PFail): 1 00:24:00.186 Atomic Compare & Write Unit: 1 00:24:00.186 Fused Compare & Write: Not Supported 00:24:00.186 Scatter-Gather List 00:24:00.186 SGL Command Set: Supported 00:24:00.186 SGL Keyed: Not Supported 00:24:00.186 SGL Bit Bucket Descriptor: Not Supported 00:24:00.186 SGL Metadata Pointer: Not Supported 00:24:00.186 Oversized SGL: Not Supported 00:24:00.186 SGL Metadata Address: Not Supported 00:24:00.186 SGL Offset: Supported 00:24:00.186 Transport SGL Data Block: Not Supported 00:24:00.186 Replay Protected Memory Block: Not Supported 00:24:00.186 00:24:00.186 Firmware Slot Information 00:24:00.186 ========================= 00:24:00.186 Active slot: 0 00:24:00.186 00:24:00.186 00:24:00.186 Error Log 00:24:00.186 
========= 00:24:00.186 00:24:00.186 Active Namespaces 00:24:00.186 ================= 00:24:00.186 Discovery Log Page 00:24:00.186 ================== 00:24:00.186 Generation Counter: 2 00:24:00.186 Number of Records: 2 00:24:00.186 Record Format: 0 00:24:00.186 00:24:00.186 Discovery Log Entry 0 00:24:00.186 ---------------------- 00:24:00.186 Transport Type: 3 (TCP) 00:24:00.186 Address Family: 1 (IPv4) 00:24:00.186 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:00.186 Entry Flags: 00:24:00.186 Duplicate Returned Information: 0 00:24:00.186 Explicit Persistent Connection Support for Discovery: 0 00:24:00.186 Transport Requirements: 00:24:00.186 Secure Channel: Not Specified 00:24:00.186 Port ID: 1 (0x0001) 00:24:00.186 Controller ID: 65535 (0xffff) 00:24:00.186 Admin Max SQ Size: 32 00:24:00.186 Transport Service Identifier: 4420 00:24:00.186 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:00.186 Transport Address: 10.0.0.1 00:24:00.186 Discovery Log Entry 1 00:24:00.186 ---------------------- 00:24:00.186 Transport Type: 3 (TCP) 00:24:00.186 Address Family: 1 (IPv4) 00:24:00.186 Subsystem Type: 2 (NVM Subsystem) 00:24:00.186 Entry Flags: 00:24:00.186 Duplicate Returned Information: 0 00:24:00.186 Explicit Persistent Connection Support for Discovery: 0 00:24:00.186 Transport Requirements: 00:24:00.186 Secure Channel: Not Specified 00:24:00.186 Port ID: 1 (0x0001) 00:24:00.186 Controller ID: 65535 (0xffff) 00:24:00.186 Admin Max SQ Size: 32 00:24:00.186 Transport Service Identifier: 4420 00:24:00.186 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:00.186 Transport Address: 10.0.0.1 00:24:00.186 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:00.444 get_feature(0x01) failed 00:24:00.444 get_feature(0x02) failed 00:24:00.444 get_feature(0x04) failed 00:24:00.444 ===================================================== 00:24:00.444 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:00.444 ===================================================== 00:24:00.444 Controller Capabilities/Features 00:24:00.444 ================================ 00:24:00.444 Vendor ID: 0000 00:24:00.444 Subsystem Vendor ID: 0000 00:24:00.444 Serial Number: f03fc7915784d94359c5 00:24:00.444 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:00.444 Firmware Version: 6.8.9-20 00:24:00.444 Recommended Arb Burst: 6 00:24:00.444 IEEE OUI Identifier: 00 00 00 00:24:00.444 Multi-path I/O 00:24:00.444 May have multiple subsystem ports: Yes 00:24:00.444 May have multiple controllers: Yes 00:24:00.444 Associated with SR-IOV VF: No 00:24:00.444 Max Data Transfer Size: Unlimited 00:24:00.444 Max Number of Namespaces: 1024 00:24:00.444 Max Number of I/O Queues: 128 00:24:00.444 NVMe Specification Version (VS): 1.3 00:24:00.444 NVMe Specification Version (Identify): 1.3 00:24:00.444 Maximum Queue Entries: 1024 00:24:00.444 Contiguous Queues Required: No 00:24:00.444 Arbitration Mechanisms Supported 00:24:00.444 Weighted Round Robin: Not Supported 00:24:00.444 Vendor Specific: Not Supported 00:24:00.444 Reset Timeout: 7500 ms 00:24:00.444 Doorbell Stride: 4 bytes 00:24:00.444 NVM Subsystem Reset: Not Supported 00:24:00.444 Command Sets Supported 00:24:00.444 NVM Command Set: Supported 00:24:00.444 Boot Partition: Not Supported 00:24:00.444 
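The controller answering both identify runs here is the kernel nvmet soft target that nvmf/common.sh assembled through configfs near the top of this test (the mkdir/echo/ln -s sequence in the trace). A minimal standalone sketch of that sequence follows; the attribute file names are the standard nvmet configfs layout and are reconstructed, since xtrace only records the echoed values, not the redirect targets (the trace also writes an "SPDK-nqn..." identification string into the subsystem's model/serial attribute, omitted here):

    # reconstruct the configfs target shown in the trace above
    modprobe nvmet nvmet-tcp                                        # kernel soft target + TCP transport
    cd /sys/kernel/config/nvmet
    mkdir subsystems/nqn.2016-06.io.spdk:testnqn                    # subsystem
    mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1       # one namespace
    mkdir ports/1                                                   # one listener port
    echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
    echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
    echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
    echo 10.0.0.1     > ports/1/addr_traddr
    echo tcp          > ports/1/addr_trtype
    echo 4420         > ports/1/addr_trsvcid
    echo ipv4         > ports/1/addr_adrfam
    ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn \
          /sys/kernel/config/nvmet/ports/1/subsystems/              # expose the subsystem on the port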
Memory Page Size Minimum: 4096 bytes 00:24:00.444 Memory Page Size Maximum: 4096 bytes 00:24:00.444 Persistent Memory Region: Not Supported 00:24:00.444 Optional Asynchronous Events Supported 00:24:00.444 Namespace Attribute Notices: Supported 00:24:00.444 Firmware Activation Notices: Not Supported 00:24:00.444 ANA Change Notices: Supported 00:24:00.444 PLE Aggregate Log Change Notices: Not Supported 00:24:00.444 LBA Status Info Alert Notices: Not Supported 00:24:00.444 EGE Aggregate Log Change Notices: Not Supported 00:24:00.444 Normal NVM Subsystem Shutdown event: Not Supported 00:24:00.444 Zone Descriptor Change Notices: Not Supported 00:24:00.444 Discovery Log Change Notices: Not Supported 00:24:00.444 Controller Attributes 00:24:00.444 128-bit Host Identifier: Supported 00:24:00.444 Non-Operational Permissive Mode: Not Supported 00:24:00.444 NVM Sets: Not Supported 00:24:00.444 Read Recovery Levels: Not Supported 00:24:00.444 Endurance Groups: Not Supported 00:24:00.444 Predictable Latency Mode: Not Supported 00:24:00.444 Traffic Based Keep ALive: Supported 00:24:00.444 Namespace Granularity: Not Supported 00:24:00.444 SQ Associations: Not Supported 00:24:00.444 UUID List: Not Supported 00:24:00.444 Multi-Domain Subsystem: Not Supported 00:24:00.444 Fixed Capacity Management: Not Supported 00:24:00.444 Variable Capacity Management: Not Supported 00:24:00.444 Delete Endurance Group: Not Supported 00:24:00.444 Delete NVM Set: Not Supported 00:24:00.444 Extended LBA Formats Supported: Not Supported 00:24:00.445 Flexible Data Placement Supported: Not Supported 00:24:00.445 00:24:00.445 Controller Memory Buffer Support 00:24:00.445 ================================ 00:24:00.445 Supported: No 00:24:00.445 00:24:00.445 Persistent Memory Region Support 00:24:00.445 ================================ 00:24:00.445 Supported: No 00:24:00.445 00:24:00.445 Admin Command Set Attributes 00:24:00.445 ============================ 00:24:00.445 Security Send/Receive: Not Supported 00:24:00.445 Format NVM: Not Supported 00:24:00.445 Firmware Activate/Download: Not Supported 00:24:00.445 Namespace Management: Not Supported 00:24:00.445 Device Self-Test: Not Supported 00:24:00.445 Directives: Not Supported 00:24:00.445 NVMe-MI: Not Supported 00:24:00.445 Virtualization Management: Not Supported 00:24:00.445 Doorbell Buffer Config: Not Supported 00:24:00.445 Get LBA Status Capability: Not Supported 00:24:00.445 Command & Feature Lockdown Capability: Not Supported 00:24:00.445 Abort Command Limit: 4 00:24:00.445 Async Event Request Limit: 4 00:24:00.445 Number of Firmware Slots: N/A 00:24:00.445 Firmware Slot 1 Read-Only: N/A 00:24:00.445 Firmware Activation Without Reset: N/A 00:24:00.445 Multiple Update Detection Support: N/A 00:24:00.445 Firmware Update Granularity: No Information Provided 00:24:00.445 Per-Namespace SMART Log: Yes 00:24:00.445 Asymmetric Namespace Access Log Page: Supported 00:24:00.445 ANA Transition Time : 10 sec 00:24:00.445 00:24:00.445 Asymmetric Namespace Access Capabilities 00:24:00.445 ANA Optimized State : Supported 00:24:00.445 ANA Non-Optimized State : Supported 00:24:00.445 ANA Inaccessible State : Supported 00:24:00.445 ANA Persistent Loss State : Supported 00:24:00.445 ANA Change State : Supported 00:24:00.445 ANAGRPID is not changed : No 00:24:00.445 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:00.445 00:24:00.445 ANA Group Identifier Maximum : 128 00:24:00.445 Number of ANA Group Identifiers : 128 00:24:00.445 Max Number of Allowed Namespaces : 1024 00:24:00.445 
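The block above shows the kernel target advertising ANA support, and the ANA group descriptor further down in this output reports a single group (group 1) in the optimized state. For a quick look at the raw log page from the host side, the generic get-log path in nvme-cli works (log page 0x0c is the ANA log in the NVMe spec); the device node is whatever /dev/nvmeX the connect produced, used here only as an example:

    # dump the ANA log page (log id 0x0c) from the connected controller; -b keeps raw bytes
    nvme get-log /dev/nvme0 --log-id=0x0c --log-len=4096 -b | xxd | head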
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:00.445 Command Effects Log Page: Supported 00:24:00.445 Get Log Page Extended Data: Supported 00:24:00.445 Telemetry Log Pages: Not Supported 00:24:00.445 Persistent Event Log Pages: Not Supported 00:24:00.445 Supported Log Pages Log Page: May Support 00:24:00.445 Commands Supported & Effects Log Page: Not Supported 00:24:00.445 Feature Identifiers & Effects Log Page:May Support 00:24:00.445 NVMe-MI Commands & Effects Log Page: May Support 00:24:00.445 Data Area 4 for Telemetry Log: Not Supported 00:24:00.445 Error Log Page Entries Supported: 128 00:24:00.445 Keep Alive: Supported 00:24:00.445 Keep Alive Granularity: 1000 ms 00:24:00.445 00:24:00.445 NVM Command Set Attributes 00:24:00.445 ========================== 00:24:00.445 Submission Queue Entry Size 00:24:00.445 Max: 64 00:24:00.445 Min: 64 00:24:00.445 Completion Queue Entry Size 00:24:00.445 Max: 16 00:24:00.445 Min: 16 00:24:00.445 Number of Namespaces: 1024 00:24:00.445 Compare Command: Not Supported 00:24:00.445 Write Uncorrectable Command: Not Supported 00:24:00.445 Dataset Management Command: Supported 00:24:00.445 Write Zeroes Command: Supported 00:24:00.445 Set Features Save Field: Not Supported 00:24:00.445 Reservations: Not Supported 00:24:00.445 Timestamp: Not Supported 00:24:00.445 Copy: Not Supported 00:24:00.445 Volatile Write Cache: Present 00:24:00.445 Atomic Write Unit (Normal): 1 00:24:00.445 Atomic Write Unit (PFail): 1 00:24:00.445 Atomic Compare & Write Unit: 1 00:24:00.445 Fused Compare & Write: Not Supported 00:24:00.445 Scatter-Gather List 00:24:00.445 SGL Command Set: Supported 00:24:00.445 SGL Keyed: Not Supported 00:24:00.445 SGL Bit Bucket Descriptor: Not Supported 00:24:00.445 SGL Metadata Pointer: Not Supported 00:24:00.445 Oversized SGL: Not Supported 00:24:00.445 SGL Metadata Address: Not Supported 00:24:00.445 SGL Offset: Supported 00:24:00.445 Transport SGL Data Block: Not Supported 00:24:00.445 Replay Protected Memory Block: Not Supported 00:24:00.445 00:24:00.445 Firmware Slot Information 00:24:00.445 ========================= 00:24:00.445 Active slot: 0 00:24:00.445 00:24:00.445 Asymmetric Namespace Access 00:24:00.445 =========================== 00:24:00.445 Change Count : 0 00:24:00.445 Number of ANA Group Descriptors : 1 00:24:00.445 ANA Group Descriptor : 0 00:24:00.445 ANA Group ID : 1 00:24:00.445 Number of NSID Values : 1 00:24:00.445 Change Count : 0 00:24:00.445 ANA State : 1 00:24:00.445 Namespace Identifier : 1 00:24:00.445 00:24:00.445 Commands Supported and Effects 00:24:00.445 ============================== 00:24:00.445 Admin Commands 00:24:00.445 -------------- 00:24:00.445 Get Log Page (02h): Supported 00:24:00.445 Identify (06h): Supported 00:24:00.445 Abort (08h): Supported 00:24:00.445 Set Features (09h): Supported 00:24:00.445 Get Features (0Ah): Supported 00:24:00.445 Asynchronous Event Request (0Ch): Supported 00:24:00.445 Keep Alive (18h): Supported 00:24:00.445 I/O Commands 00:24:00.445 ------------ 00:24:00.445 Flush (00h): Supported 00:24:00.445 Write (01h): Supported LBA-Change 00:24:00.445 Read (02h): Supported 00:24:00.445 Write Zeroes (08h): Supported LBA-Change 00:24:00.445 Dataset Management (09h): Supported 00:24:00.445 00:24:00.445 Error Log 00:24:00.445 ========= 00:24:00.445 Entry: 0 00:24:00.445 Error Count: 0x3 00:24:00.445 Submission Queue Id: 0x0 00:24:00.445 Command Id: 0x5 00:24:00.445 Phase Bit: 0 00:24:00.445 Status Code: 0x2 00:24:00.445 Status Code Type: 0x0 00:24:00.445 Do Not Retry: 1 00:24:00.445 
Error Location: 0x28 00:24:00.445 LBA: 0x0 00:24:00.445 Namespace: 0x0 00:24:00.445 Vendor Log Page: 0x0 00:24:00.445 ----------- 00:24:00.445 Entry: 1 00:24:00.445 Error Count: 0x2 00:24:00.445 Submission Queue Id: 0x0 00:24:00.445 Command Id: 0x5 00:24:00.445 Phase Bit: 0 00:24:00.445 Status Code: 0x2 00:24:00.445 Status Code Type: 0x0 00:24:00.445 Do Not Retry: 1 00:24:00.445 Error Location: 0x28 00:24:00.445 LBA: 0x0 00:24:00.445 Namespace: 0x0 00:24:00.445 Vendor Log Page: 0x0 00:24:00.445 ----------- 00:24:00.445 Entry: 2 00:24:00.445 Error Count: 0x1 00:24:00.445 Submission Queue Id: 0x0 00:24:00.445 Command Id: 0x4 00:24:00.445 Phase Bit: 0 00:24:00.445 Status Code: 0x2 00:24:00.445 Status Code Type: 0x0 00:24:00.445 Do Not Retry: 1 00:24:00.445 Error Location: 0x28 00:24:00.445 LBA: 0x0 00:24:00.445 Namespace: 0x0 00:24:00.445 Vendor Log Page: 0x0 00:24:00.445 00:24:00.445 Number of Queues 00:24:00.445 ================ 00:24:00.445 Number of I/O Submission Queues: 128 00:24:00.445 Number of I/O Completion Queues: 128 00:24:00.445 00:24:00.445 ZNS Specific Controller Data 00:24:00.445 ============================ 00:24:00.445 Zone Append Size Limit: 0 00:24:00.445 00:24:00.445 00:24:00.445 Active Namespaces 00:24:00.445 ================= 00:24:00.445 get_feature(0x05) failed 00:24:00.445 Namespace ID:1 00:24:00.445 Command Set Identifier: NVM (00h) 00:24:00.445 Deallocate: Supported 00:24:00.445 Deallocated/Unwritten Error: Not Supported 00:24:00.445 Deallocated Read Value: Unknown 00:24:00.445 Deallocate in Write Zeroes: Not Supported 00:24:00.445 Deallocated Guard Field: 0xFFFF 00:24:00.445 Flush: Supported 00:24:00.445 Reservation: Not Supported 00:24:00.445 Namespace Sharing Capabilities: Multiple Controllers 00:24:00.445 Size (in LBAs): 1953525168 (931GiB) 00:24:00.445 Capacity (in LBAs): 1953525168 (931GiB) 00:24:00.445 Utilization (in LBAs): 1953525168 (931GiB) 00:24:00.445 UUID: 4cb99b4a-dfda-4050-8337-1f0f2d3d476d 00:24:00.445 Thin Provisioning: Not Supported 00:24:00.445 Per-NS Atomic Units: Yes 00:24:00.445 Atomic Boundary Size (Normal): 0 00:24:00.445 Atomic Boundary Size (PFail): 0 00:24:00.445 Atomic Boundary Offset: 0 00:24:00.445 NGUID/EUI64 Never Reused: No 00:24:00.445 ANA group ID: 1 00:24:00.445 Namespace Write Protected: No 00:24:00.445 Number of LBA Formats: 1 00:24:00.445 Current LBA Format: LBA Format #00 00:24:00.445 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:00.445 00:24:00.445 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:00.445 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:00.445 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:24:00.445 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:00.445 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:24:00.445 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:00.445 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:00.445 rmmod nvme_tcp 00:24:00.445 rmmod nvme_fabrics 00:24:00.445 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:00.445 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:24:00.445 05:18:36 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:24:00.445 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:24:00.445 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:00.445 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:00.445 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:00.445 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:24:00.445 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:24:00.445 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:00.445 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:24:00.445 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:00.445 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:00.445 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.445 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:00.445 05:18:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.461 05:18:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:02.461 05:18:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:02.461 05:18:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:02.461 05:18:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:24:02.461 05:18:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:02.461 05:18:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:02.461 05:18:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:02.461 05:18:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:02.461 05:18:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:02.461 05:18:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:02.461 05:18:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:05.743 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:05.743 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:05.743 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:05.743 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:05.743 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:05.743 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:24:05.743 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:05.743 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:05.743 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:05.743 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:05.743 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:05.743 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:05.743 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:05.743 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:05.743 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:05.743 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:06.310 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:24:06.310 00:24:06.310 real 0m16.096s 00:24:06.310 user 0m4.127s 00:24:06.310 sys 0m8.382s 00:24:06.310 05:18:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:06.310 05:18:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.310 ************************************ 00:24:06.310 END TEST nvmf_identify_kernel_target 00:24:06.310 ************************************ 00:24:06.310 05:18:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:06.310 05:18:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:06.310 05:18:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:06.310 05:18:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.567 ************************************ 00:24:06.567 START TEST nvmf_auth_host 00:24:06.567 ************************************ 00:24:06.567 05:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:06.567 * Looking for test storage... 
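The clean_kernel_target sequence traced just before this point tears the configfs tree back down in reverse creation order and unloads the soft-target modules. Condensed, with the redirect target of the "echo 0" reconstructed (it disables the namespace before anything is removed):

    echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
    rm -f    /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn   # unlink from the port
    rmdir    /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    rmdir    /sys/kernel/config/nvmet/ports/1
    rmdir    /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe -r nvmet_tcp nvmet                                                        # drop the soft-target modules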
00:24:06.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:06.567 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:06.567 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:06.567 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:06.567 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:06.567 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:06.567 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.567 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.567 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.567 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.567 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.567 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:06.567 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:06.567 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:06.567 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:06.567 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.567 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:24:06.567 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:24:06.567 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:06.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.568 --rc genhtml_branch_coverage=1 00:24:06.568 --rc genhtml_function_coverage=1 00:24:06.568 --rc genhtml_legend=1 00:24:06.568 --rc geninfo_all_blocks=1 00:24:06.568 --rc geninfo_unexecuted_blocks=1 00:24:06.568 00:24:06.568 ' 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:06.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.568 --rc genhtml_branch_coverage=1 00:24:06.568 --rc genhtml_function_coverage=1 00:24:06.568 --rc genhtml_legend=1 00:24:06.568 --rc geninfo_all_blocks=1 00:24:06.568 --rc geninfo_unexecuted_blocks=1 00:24:06.568 00:24:06.568 ' 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:06.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.568 --rc genhtml_branch_coverage=1 00:24:06.568 --rc genhtml_function_coverage=1 00:24:06.568 --rc genhtml_legend=1 00:24:06.568 --rc geninfo_all_blocks=1 00:24:06.568 --rc geninfo_unexecuted_blocks=1 00:24:06.568 00:24:06.568 ' 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:06.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.568 --rc genhtml_branch_coverage=1 00:24:06.568 --rc genhtml_function_coverage=1 00:24:06.568 --rc genhtml_legend=1 00:24:06.568 --rc geninfo_all_blocks=1 00:24:06.568 --rc geninfo_unexecuted_blocks=1 00:24:06.568 00:24:06.568 ' 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.568 05:18:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:06.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:06.568 05:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:13.130 05:18:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:13.130 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:13.130 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.130 
05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:13.130 Found net devices under 0000:86:00.0: cvl_0_0 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:13.130 Found net devices under 0000:86:00.1: cvl_0_1 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:13.130 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:13.131 05:18:48 
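The device scan above is driven entirely by sysfs: each candidate PCI function is matched against known vendor/device IDs and then checked for kernel net interfaces under its sysfs node (the pci_net_devs glob). The same lookup done by hand, using the E810 IDs matched in this run, looks roughly like:

    lspci -d 8086:159b                            # Intel E810 functions (device id 0x159b, matched above)
    ls /sys/bus/pci/devices/0000:86:00.0/net/     # net interfaces bound to one of them (here: cvl_0_0)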
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:13.131 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:13.131 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:13.131 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:13.131 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:13.131 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:13.131 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:13.131 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:13.131 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:13.131 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:13.131 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:13.131 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:13.131 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:13.131 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:13.131 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:13.131 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:13.131 05:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:13.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:13.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:24:13.131 00:24:13.131 --- 10.0.0.2 ping statistics --- 00:24:13.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.131 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:13.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:13.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:24:13.131 00:24:13.131 --- 10.0.0.1 ping statistics --- 00:24:13.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.131 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3709791 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3709791 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3709791 ']' 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
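nvmf_tcp_init above splits the two E810 ports between a fresh network namespace and the root namespace, so the target (10.0.0.2, inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1) talk over a real link; nvmf_tgt is then launched with ip netns exec. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk                                       # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                                 # cross-namespace sanity check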
00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=60993b52987de7605f7e43657a51ec53 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.2vf 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 60993b52987de7605f7e43657a51ec53 0 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 60993b52987de7605f7e43657a51ec53 0 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=60993b52987de7605f7e43657a51ec53 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.2vf 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.2vf 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.2vf 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:13.131 05:18:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7e4cf5392b4cbedf4166182a7878c55fdacf1b338e1e78e77cb4dfb79439243b 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.MP6 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7e4cf5392b4cbedf4166182a7878c55fdacf1b338e1e78e77cb4dfb79439243b 3 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7e4cf5392b4cbedf4166182a7878c55fdacf1b338e1e78e77cb4dfb79439243b 3 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7e4cf5392b4cbedf4166182a7878c55fdacf1b338e1e78e77cb4dfb79439243b 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.MP6 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.MP6 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.MP6 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cfc5dbdec6d3ac64b1f82bc730bf50ec318e2116bcb7aa1c 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.bEM 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cfc5dbdec6d3ac64b1f82bc730bf50ec318e2116bcb7aa1c 0 00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cfc5dbdec6d3ac64b1f82bc730bf50ec318e2116bcb7aa1c 0 
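gen_dhchap_key above draws random bytes with xxd and wraps them into the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<hash id>:<base64 of key bytes plus CRC-32>:, where hash id 00 means unhashed and 01/02/03 mean SHA-256/384/512, matching the digits 0-3 passed to format_dhchap_key. A reasonably recent nvme-cli can emit the same representation directly; this is an equivalent sketch, not what the script itself runs:

    nvme gen-dhchap-key --key-length=32 --hmac=0    # 32 random bytes, no hash      -> DHHC-1:00:...
    nvme gen-dhchap-key --key-length=64 --hmac=3    # 64 random bytes, SHA-512 tag  -> DHHC-1:03:...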
00:24:13.131 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cfc5dbdec6d3ac64b1f82bc730bf50ec318e2116bcb7aa1c 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.bEM 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.bEM 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.bEM 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d581b261b767f3de0d9568e330cc1335667fcedbd13c4477 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.eNa 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d581b261b767f3de0d9568e330cc1335667fcedbd13c4477 2 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d581b261b767f3de0d9568e330cc1335667fcedbd13c4477 2 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d581b261b767f3de0d9568e330cc1335667fcedbd13c4477 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.eNa 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.eNa 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.eNa 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:13.132 05:18:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=aa0a0534c18ad1c1fee25282b7a5c659 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.h5T 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key aa0a0534c18ad1c1fee25282b7a5c659 1 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 aa0a0534c18ad1c1fee25282b7a5c659 1 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=aa0a0534c18ad1c1fee25282b7a5c659 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.h5T 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.h5T 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.h5T 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a6dcba55a21d70dcecf974370f3e140e 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Y4m 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a6dcba55a21d70dcecf974370f3e140e 1 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a6dcba55a21d70dcecf974370f3e140e 1 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=a6dcba55a21d70dcecf974370f3e140e 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Y4m 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Y4m 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Y4m 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a61d09c613a59ce6ad2b11cf421349ff096f7b0ddb377df2 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.xNW 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a61d09c613a59ce6ad2b11cf421349ff096f7b0ddb377df2 2 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a61d09c613a59ce6ad2b11cf421349ff096f7b0ddb377df2 2 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a61d09c613a59ce6ad2b11cf421349ff096f7b0ddb377df2 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:13.132 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.xNW 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.xNW 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.xNW 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:13.391 05:18:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4874b625a43d1723ad69d0b8e2f62846 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.4xK 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4874b625a43d1723ad69d0b8e2f62846 0 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4874b625a43d1723ad69d0b8e2f62846 0 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4874b625a43d1723ad69d0b8e2f62846 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.4xK 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.4xK 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.4xK 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c2b0fe8b2769ef042662cb398633dfb4a9bcc057697c46d56fa64e75ed04d456 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.iCy 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c2b0fe8b2769ef042662cb398633dfb4a9bcc057697c46d56fa64e75ed04d456 3 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c2b0fe8b2769ef042662cb398633dfb4a9bcc057697c46d56fa64e75ed04d456 3 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c2b0fe8b2769ef042662cb398633dfb4a9bcc057697c46d56fa64e75ed04d456 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.iCy 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.iCy 00:24:13.391 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.iCy 00:24:13.392 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:13.392 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3709791 00:24:13.392 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3709791 ']' 00:24:13.392 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.392 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:13.392 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.392 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:13.392 05:18:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.650 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:13.650 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:13.650 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:13.650 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.2vf 00:24:13.650 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.MP6 ]] 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MP6 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.bEM 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.eNa ]] 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.eNa 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.h5T 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Y4m ]] 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Y4m 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.xNW 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.4xK ]] 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.4xK 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.iCy 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.651 05:18:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:13.651 05:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:16.932 Waiting for block devices as requested 00:24:16.932 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:16.932 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:16.932 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:16.932 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:16.932 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:16.932 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:16.932 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:16.932 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:16.932 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:17.191 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:17.191 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:17.191 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:17.191 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:17.448 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:17.448 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:17.448 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:17.707 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:18.273 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:18.273 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:18.273 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:18.273 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:18.273 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:18.273 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:18.273 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:18.273 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:18.273 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:18.273 No valid GPT data, bailing 00:24:18.273 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:18.273 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:18.273 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:18.273 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:18.273 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:18.273 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:18.273 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:18.273 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:18.273 05:18:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:18.273 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:24:18.273 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:18.273 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:24:18.273 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:18.273 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:24:18.273 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:24:18.273 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:24:18.273 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:18.273 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:18.273 00:24:18.273 Discovery Log Number of Records 2, Generation counter 2 00:24:18.273 =====Discovery Log Entry 0====== 00:24:18.273 trtype: tcp 00:24:18.273 adrfam: ipv4 00:24:18.273 subtype: current discovery subsystem 00:24:18.273 treq: not specified, sq flow control disable supported 00:24:18.273 portid: 1 00:24:18.273 trsvcid: 4420 00:24:18.273 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:18.273 traddr: 10.0.0.1 00:24:18.274 eflags: none 00:24:18.274 sectype: none 00:24:18.274 =====Discovery Log Entry 1====== 00:24:18.274 trtype: tcp 00:24:18.274 adrfam: ipv4 00:24:18.274 subtype: nvme subsystem 00:24:18.274 treq: not specified, sq flow control disable supported 00:24:18.274 portid: 1 00:24:18.274 trsvcid: 4420 00:24:18.274 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:18.274 traddr: 10.0.0.1 00:24:18.274 eflags: none 00:24:18.274 sectype: none 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: ]] 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:18.274 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:18.532 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:18.532 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:18.533 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.533 05:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.533 nvme0n1 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: ]] 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
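Each connect_authenticate pass traced from here on boils down to the same initiator-side RPC sequence. Collapsed into plain scripts/rpc.py calls (rpc_cmd in the trace is a wrapper that adds the target app's RPC socket), using the key files, NQNs and address from this run; treat it as an illustrative sketch rather than the literal harness code.

  # Register the generated secrets with the SPDK keyring
  scripts/rpc.py keyring_file_add_key key1 /tmp/spdk.key-null.bEM
  scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eNa
  # Restrict the initiator to the digest/DH-group combination under test
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # Attach with DH-HMAC-CHAP: key1 authenticates the host, ckey1 the controller
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Success is checked by listing the controller, then detaching before the next combination
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0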
00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.533 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.791 nvme0n1 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.791 05:18:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: ]] 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.791 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.049 nvme0n1 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: ]] 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.049 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.307 nvme0n1 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: ]] 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.307 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.308 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:24:19.308 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.308 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.308 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.308 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.308 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.308 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.308 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.308 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.308 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.308 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.308 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:19.308 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.308 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.565 nvme0n1 00:24:19.565 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.565 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.565 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.565 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.565 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.565 05:18:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.565 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.565 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.565 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.565 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.565 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.565 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.565 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:19.565 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.565 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:19.565 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:19.565 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:19.565 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:19.565 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:19.565 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:19.565 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:19.565 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:19.565 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.566 nvme0n1 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.566 05:18:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.566 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: ]] 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.823 nvme0n1 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.823 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: ]] 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.080 
05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.080 nvme0n1 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.080 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.336 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.336 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.336 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.336 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.336 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.336 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.336 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.336 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:20.336 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.336 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:20.336 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:20.336 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:20.336 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:20.336 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:20.336 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:20.336 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:20.336 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:20.336 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: ]] 00:24:20.336 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:20.336 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:20.336 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.336 05:18:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:20.336 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:20.336 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:20.336 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.336 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:20.336 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.336 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.337 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.337 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.337 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:20.337 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:20.337 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:20.337 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.337 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.337 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:20.337 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.337 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:20.337 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.337 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.337 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:20.337 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.337 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.337 nvme0n1 00:24:20.337 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.337 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.337 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.337 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.337 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.337 05:18:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: ]] 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.593 05:18:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.593 nvme0n1 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.593 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:20.850 05:18:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.850 nvme0n1 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.850 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: ]] 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.107 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.365 nvme0n1 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:21.365 05:18:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: ]] 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.365 05:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.622 nvme0n1 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: ]] 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:21.622 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.623 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.623 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:21.623 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.623 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:21.623 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:21.623 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:21.623 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:21.623 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.623 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.879 nvme0n1 00:24:21.879 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.879 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.879 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.879 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.879 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.879 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.879 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.879 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.879 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.879 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: ]] 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.135 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.392 nvme0n1 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.392 05:18:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.392 05:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.649 nvme0n1 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: ]] 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.649 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.213 nvme0n1 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: ]] 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 
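The rounds traced here all follow one fixed pattern per digest/dhgroup/keyid combination: the nvmet_auth_set_key step hands the target the DH-HMAC-CHAP material (the echoed 'hmac(sha256)', the FFDHE group, and the DHHC-1 secret plus optional controller secret), then the host restricts bdev_nvme to that same digest and group, attaches a controller with the matching --dhchap-key/--dhchap-ctrlr-key names, checks that it shows up as nvme0, and detaches it before the next iteration. The sketch below reconstructs that host-side sequence from the commands visible in the trace; it is illustrative only, and assumes rpc_cmd is a thin wrapper around scripts/rpc.py in an SPDK checkout ($rootdir is hypothetical here) and that key${keyid}/ckey${keyid} were registered in the SPDK keyring earlier in the test, outside this excerpt.

# Minimal sketch of one connect_authenticate round (sha256 / ffdhe6144 / keyid 0),
# following the RPCs visible in the trace above. Names and paths marked as
# assumptions in the note above are placeholders, not part of the logged run.
rpc_cmd() { "$rootdir/scripts/rpc.py" "$@"; }   # assumed wrapper around SPDK's rpc.py

digest=sha256
dhgroup=ffdhe6144
keyid=0
target_ip=10.0.0.1
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0

# Host side: allow only the digest/dhgroup under test for DH-HMAC-CHAP.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach an authenticated controller; the ckey argument is passed only when a
# controller (bidirectional) secret exists for this keyid, as in the trace.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$target_ip" -s 4420 \
    -q "$hostnqn" -n "$subnqn" \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Verify the controller actually came up, then tear it down for the next round.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0

Each subsequent block in the log is this same sequence with the dhgroup advanced (ffdhe8192 after ffdhe6144) and, once every key id has been cycled, the digest switched from sha256 to sha384.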
00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.213 05:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.471 nvme0n1 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.471 05:19:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: ]] 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:23.471 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.472 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.472 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.472 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.472 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.472 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.472 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.472 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.472 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.472 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.472 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.472 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.472 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.472 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.472 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:23.472 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.472 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.039 nvme0n1 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: ]] 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.039 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.297 nvme0n1 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.556 05:19:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.556 05:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.556 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.556 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.557 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.557 05:19:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.557 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.557 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.557 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.557 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.557 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.557 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.557 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.557 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.557 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:24.557 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.557 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.814 nvme0n1 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: ]] 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.815 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.073 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.073 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.073 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.073 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.073 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.073 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.073 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.073 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.073 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.073 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.073 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.073 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.073 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:25.073 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.073 05:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:25.639 nvme0n1 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: ]] 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.639 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.206 nvme0n1 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:26.206 
05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: ]] 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.206 05:19:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.787 nvme0n1 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: ]] 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.787 
05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.787 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:26.788 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.788 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.370 nvme0n1 00:24:27.370 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.370 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.370 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.370 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.370 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.370 05:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.370 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.370 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.370 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.370 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.629 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.195 nvme0n1 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: ]] 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:28.195 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:28.196 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.196 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:28.196 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.196 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.196 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.196 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.196 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.196 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.196 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.196 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.196 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.196 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.196 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.196 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.196 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.196 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.196 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:28.196 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.196 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.454 nvme0n1 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: ]] 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.454 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.455 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.455 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.455 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.455 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:28.455 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.455 05:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.455 nvme0n1 00:24:28.455 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.455 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.455 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.455 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.455 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.455 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:28.711 05:19:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: ]] 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.711 nvme0n1 00:24:28.711 05:19:05 
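
The records above repeat one pattern per digest/dhgroup/key index: program the kernel target with the key material, restrict the SPDK host to the same digest and DH group, attach a controller with DH-HMAC-CHAP enabled, confirm the controller and its nvme0n1 namespace show up, then detach. A condensed sketch of that driver loop, using only the function and array names visible in the xtrace (the setup that fills keys[]/ckeys[] happened earlier in the run and is outside this excerpt):

  # one authentication round-trip per (digest, dhgroup, keyid) combination
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # target side: load key/ckey for this host
        connect_authenticate "$digest" "$dhgroup" "$keyid"   # host side: attach, verify nvme0, detach
      done
    done
  done
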
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.711 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: ]] 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.969 nvme0n1 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.969 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.226 nvme0n1 00:24:29.226 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.226 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.226 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.226 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.226 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.226 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.226 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.226 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.226 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.226 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.226 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.226 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:29.226 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: ]] 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.227 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.485 nvme0n1 00:24:29.485 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.485 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.485 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.485 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.485 05:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.485 
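
Every connect_authenticate round ends with the host-side check traced here: once bdev_nvme_attach_controller returns and the nvme0n1 namespace appears, the controller list is queried over JSON-RPC and the name is compared against nvme0 before the controller is detached again. Roughly, in the shape the trace shows (rpc_cmd is the test framework's wrapper around SPDK's JSON-RPC client):

  # verify the authenticated attach actually produced a controller, then tear it down
  ctrl_name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ "$ctrl_name" == "nvme0" ]]               # the test fails here if authentication did not complete
  rpc_cmd bdev_nvme_detach_controller nvme0
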
05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: ]] 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.485 05:19:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.485 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.742 nvme0n1 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: ]] 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.742 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.743 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.743 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.743 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.743 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.743 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.743 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.743 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.743 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.743 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.743 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.743 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.743 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:29.743 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.743 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.999 nvme0n1 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: ]] 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.999 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.257 nvme0n1 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:30.257 
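
nvmet_auth_set_key (host/auth.sh@42) is the target-side half of each round: it selects the HMAC and DH group and writes the DHHC-1 secrets for this host. xtrace does not show where the echo output is redirected; on a kernel nvmet target the usual destinations would be the per-host configfs attributes, roughly as below. The configfs path and attribute names are assumptions based on the hostnqn used in this run, not something visible in this excerpt:

  # assumed kernel-nvmet configfs layout for DH-CHAP; not shown in the trace itself
  host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo "hmac(${digest})" > "${host_cfg}/dhchap_hash"       # e.g. hmac(sha384)
  echo "${dhgroup}"      > "${host_cfg}/dhchap_dhgroup"    # e.g. ffdhe3072
  echo "${key}"          > "${host_cfg}/dhchap_key"        # host secret, DHHC-1:xx:...:
  [[ -z "${ckey}" ]] || echo "${ckey}" > "${host_cfg}/dhchap_ctrl_key"   # controller secret, bidirectional case only
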
05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.257 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.515 nvme0n1 00:24:30.515 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.515 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.515 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.515 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.515 05:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.515 
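
Key index 4 is the one-way case: ckeys[4] is empty, so the [[ -z '' ]] branch above skips the controller secret and the attach carries only --dhchap-key key4. The optional argument is built with bash's ${var:+...} expansion (host/auth.sh@58); the same idiom, slightly simplified:

  # add --dhchap-ctrlr-key only when a controller key exists for this key index
  ckey_arg=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey_arg[@]}"
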
05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: ]] 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.515 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.772 nvme0n1 00:24:30.772 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.772 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.772 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.772 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.772 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.772 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.772 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.772 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.772 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.772 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.772 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.772 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.772 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: ]] 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:30.773 05:19:07 
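
The 10.0.0.1 address passed to every attach above comes from get_main_ns_ip (nvmf/common.sh@769), whose trace is interleaved through these records: it keeps a small map from transport to the environment variable holding the right address, then dereferences it. Condensed sketch of that logic; the transport variable name is illustrative, and the real helper also checks that each candidate is non-empty (the [[ -z ... ]] tests seen in the trace):

  # resolve the initiator address for the transport in use
  get_main_ns_ip() {
      local -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )
      local ip_var=${ip_candidates[$TEST_TRANSPORT]}   # tcp -> NVMF_INITIATOR_IP
      echo "${!ip_var}"                                # indirect expansion -> 10.0.0.1 in this run
  }
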
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.773 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.029 nvme0n1 00:24:31.029 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.029 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.029 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.029 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.029 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: ]] 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.287 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.544 nvme0n1 00:24:31.544 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.544 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.544 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.544 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.544 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.544 05:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.544 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.544 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.544 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.544 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.544 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.544 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.544 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:24:31.544 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.544 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:31.544 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:31.544 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:31.544 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:31.544 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:31.544 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:31.544 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:31.544 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:31.544 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: ]] 00:24:31.544 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:31.544 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:31.545 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.545 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:31.545 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:31.545 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:31.545 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.545 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:31.545 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.545 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.545 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.545 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.545 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.545 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.545 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.545 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.545 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.545 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.545 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.545 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.545 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.545 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.545 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:31.545 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.545 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.817 nvme0n1 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:31.817 05:19:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.817 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.075 nvme0n1 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: ]] 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.642 nvme0n1 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: ]] 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.642 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.643 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.643 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.643 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.643 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.643 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:32.643 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.643 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.939 nvme0n1 00:24:32.939 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.939 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.939 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.939 05:19:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.939 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.939 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.939 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.939 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.939 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.939 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.940 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.940 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.940 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:32.940 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.940 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:32.940 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:32.940 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:32.940 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:32.940 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:32.940 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:32.940 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:32.940 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:32.940 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: ]] 00:24:32.940 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:32.940 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:32.940 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.940 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:32.940 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:32.940 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:32.940 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.940 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:32.940 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.940 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.218 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.218 05:19:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.218 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.218 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.218 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.218 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.218 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.218 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.218 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.218 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.218 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.218 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.218 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:33.218 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.218 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.477 nvme0n1 00:24:33.477 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.477 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.477 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.477 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.477 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.477 05:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: ]] 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:33.477 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.477 
05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.044 nvme0n1 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.045 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.303 nvme0n1 00:24:34.303 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.303 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.303 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.303 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.303 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.303 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.303 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.303 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.303 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.303 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.303 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.303 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:34.303 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.303 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:34.303 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.303 05:19:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:34.303 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:34.303 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:34.303 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:34.303 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:34.303 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:34.303 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:34.303 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:34.303 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: ]] 00:24:34.303 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:34.303 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:34.303 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.562 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:34.562 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:34.562 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:34.562 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.562 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:34.562 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.562 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.562 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.562 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.562 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.562 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.562 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:34.562 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.562 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.562 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:34.562 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.562 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:34.562 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:34.562 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:34.562 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:34.562 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.562 05:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.129 nvme0n1 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: ]] 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:35.129 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:35.130 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.130 05:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.698 nvme0n1 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: ]] 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.698 
05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.698 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.266 nvme0n1 00:24:36.266 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.266 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.266 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.266 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.266 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.266 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.266 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.266 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.266 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.266 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.266 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.266 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.266 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:36.266 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.266 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:36.266 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:36.266 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:36.266 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:36.267 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:36.267 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:36.267 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:36.267 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:36.267 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: ]] 00:24:36.267 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:36.267 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:36.267 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.267 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:36.267 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:36.267 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:36.267 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.267 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:36.267 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.267 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.526 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.526 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.526 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:36.526 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:36.526 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:36.526 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.526 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.526 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:36.526 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.526 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:36.526 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:36.526 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:36.526 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:36.526 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.526 05:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.095 nvme0n1 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.095 05:19:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:37.095 05:19:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.095 05:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.663 nvme0n1 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: ]] 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.663 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:37.922 nvme0n1 00:24:37.922 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.922 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.922 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.922 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.922 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.922 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.922 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: ]] 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.923 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.182 nvme0n1 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:38.182 
05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: ]] 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.182 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.441 nvme0n1 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: ]] 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.441 
05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.441 05:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.441 nvme0n1 00:24:38.441 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.441 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.441 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.441 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.441 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.441 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.698 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:38.699 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.699 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:38.699 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:38.699 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:38.699 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:38.699 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.699 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.699 nvme0n1 00:24:38.699 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.699 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.699 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.699 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.699 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.699 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.699 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.699 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.699 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.699 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: ]] 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.957 nvme0n1 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.957 
05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.957 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.216 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.216 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.216 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:39.216 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.216 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:39.216 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:39.216 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:39.216 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:39.216 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:39.216 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:39.216 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:39.216 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:39.216 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: ]] 00:24:39.216 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:39.216 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:39.216 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.216 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:39.216 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:39.216 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:39.216 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.216 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:39.216 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.216 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:39.217 05:19:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.217 nvme0n1 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:39.217 05:19:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: ]] 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.217 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.476 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.476 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.476 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:39.476 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:39.476 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:39.476 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.476 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.476 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:39.476 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.476 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:39.476 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:39.476 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:39.476 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:39.476 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.476 05:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.476 nvme0n1 00:24:39.476 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.476 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.476 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.476 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.476 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.476 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.476 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.476 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.476 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.476 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.476 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.476 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.476 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:39.476 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.476 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:39.734 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:39.734 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:39.734 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:39.734 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:39.734 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:39.734 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:39.734 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:39.734 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: ]] 00:24:39.734 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:39.734 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:39.734 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.735 05:19:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.735 nvme0n1 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:39.735 
05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.735 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.993 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.993 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.993 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:39.993 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:39.993 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:39.993 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.993 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.993 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:39.993 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.993 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:39.993 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:39.993 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:39.993 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:39.993 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.993 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
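[Annotation] The trace keeps repeating the same host-side cycle for every key: restrict the initiator's DH-HMAC-CHAP options, attach over TCP with keyN (plus the controller key ckeyN when one exists), confirm the controller shows up, then detach before the next iteration. A minimal sketch of that cycle, reconstructed from the rpc_cmd calls at host/auth.sh@60-@65 in this trace; rpc_cmd is assumed to be the suite's wrapper around SPDK's scripts/rpc.py, and the address, port, and NQNs are copied verbatim from the log.

```bash
# Per-key connect/verify/teardown cycle as seen in this trace (sha512 + ffdhe3072,
# keyid 3 shown). rpc_cmd is assumed to wrap SPDK's scripts/rpc.py.
digest=sha512 dhgroup=ffdhe3072 keyid=3

# Limit the initiator to the digest/dhgroup pair under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach over TCP, authenticating with keyN and, when present, the controller key ckeyN.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# The controller must be visible as nvme0 before it is torn down for the next key.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0
```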
00:24:39.993 nvme0n1 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: ]] 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:39.994 05:19:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.994 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:40.252 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.252 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.252 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.252 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.252 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:40.252 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:40.252 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:40.252 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.252 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.252 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:40.252 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.252 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:40.252 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:40.252 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:40.252 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:40.252 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.252 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.510 nvme0n1 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.510 05:19:16 
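[Annotation] One detail worth calling out from the host/auth.sh@58 line that recurs in every iteration: the controller (bidirectional) key is optional, so the script builds the extra --dhchap-ctrlr-key arguments with a ${parameter:+word} expansion into an array, which collapses to nothing when the keyid has no ckey (keyid 4 in this run). A small, self-contained illustration of that idiom; the values below are placeholders, not the secrets from the log.

```bash
# ${ckeys[keyid]:+...} expands to the flag pair only when a controller key is
# configured for that keyid; otherwise the array stays empty and nothing extra
# is appended to the attach command. Placeholder values, not the log's secrets.
ckeys=("ck0" "ck1" "ck2" "ck3" "")   # keyid 4 intentionally has no controller key
for keyid in "${!ckeys[@]}"; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
done
```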
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: ]] 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:40.510 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:40.511 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.511 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:40.511 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.511 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.511 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.511 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.511 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:40.511 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:40.511 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:40.511 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.511 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.511 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:40.511 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.511 05:19:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:40.511 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:40.511 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:40.511 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:40.511 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.511 05:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.769 nvme0n1 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: ]] 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.769 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.027 nvme0n1 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: ]] 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.027 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.285 nvme0n1 00:24:41.285 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.285 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.285 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.285 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.285 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.285 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.285 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.285 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.286 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.286 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:41.544 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.545 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:41.545 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:41.545 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:41.545 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:41.545 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.545 05:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.802 nvme0n1 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: ]] 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.803 05:19:18 
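[Annotation] By this point the sweep has moved from ffdhe4096 to ffdhe6144. The enclosing structure visible at host/auth.sh@101-@103 is a nested loop: every DH group is exercised against every key index with the same digest, doing the target-side key setup and then the host-side connect for each combination. A rough, runnable reconstruction of that outer loop with stubbed helpers; the group list and key names are taken from what this part of the log shows, and the stubs are placeholders for the real functions.

```bash
# Outer sweep reconstructed from host/auth.sh@101-@103: each DH group against
# each key index, with the digest fixed at sha512 in this part of the run.
digest=sha512
dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # groups seen in this section of the log
keys=(key0 key1 key2 key3 key4)                      # five key slots, as in the trace

# Stubs standing in for the helpers traced at host/auth.sh@42-@65.
nvmet_auth_set_key()   { echo "target: digest=$1 dhgroup=$2 keyid=$3"; }
connect_authenticate() { echo "host:   digest=$1 dhgroup=$2 keyid=$3"; }

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
done
```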
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.803 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.061 nvme0n1 00:24:42.061 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.061 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.061 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.061 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.061 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.061 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: ]] 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:42.319 05:19:18 
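[Annotation] The nvmf/common.sh@769-@783 block that precedes every attach is the helper that decides which address the initiator dials: it maps the transport to the name of an environment variable (NVMF_FIRST_TARGET_IP for RDMA, NVMF_INITIATOR_IP for TCP) and then dereferences it. A reconstruction of that selection from the trace; passing the transport as an argument and the use of bash indirect expansion are simplifying assumptions about the real helper.

```bash
# get_main_ns_ip as reconstructed from the nvmf/common.sh@769-@783 trace:
# choose the variable name for this transport, then dereference it.
get_main_ns_ip() {
    local transport=$1 ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs dial the initiator-side IP

    [[ -z $transport ]] && return 1
    [[ -z ${ip_candidates[$transport]} ]] && return 1
    ip=${ip_candidates[$transport]}
    [[ -z ${!ip} ]] && return 1                  # indirect expansion of the chosen variable
    echo "${!ip}"
}

NVMF_INITIATOR_IP=10.0.0.1
get_main_ns_ip tcp   # prints 10.0.0.1, the address every attach in this log dials
```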
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.319 05:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.577 nvme0n1 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: ]] 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.577 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.143 nvme0n1 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: ]] 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.143 05:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.708 nvme0n1 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:43.708 05:19:20 
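[Annotation] Each host-side connect is preceded by nvmet_auth_set_key <digest> <dhgroup> <keyid> (host/auth.sh@42-@51), which echoes the kernel crypto name of the hash ('hmac(sha512)'), the DH group, and the DHHC-1 secret strings; the 00-03 field after "DHHC-1:" encodes which hash, if any, was used to transform the secret. Those echoes strongly suggest writes into the Linux kernel nvmet configfs entry for the host, but the log does not show the destination paths, so the sketch below is an assumption about the standard nvmet DH-HMAC-CHAP layout rather than something the trace confirms.

```bash
# Hedged sketch of the target-side half: the values echoed at host/auth.sh@48-@51
# presumably land in the kernel nvmet configfs entry for the host. The paths and
# attribute names below are assumptions (standard Linux nvmet auth layout), not
# shown in this log; the key string is the keyid-4 secret taken from the trace.
hostnqn=nqn.2024-02.io.spdk:host0
host_cfs=/sys/kernel/config/nvmet/hosts/$hostnqn

echo 'hmac(sha512)' > "$host_cfs/dhchap_hash"      # digest under test
echo ffdhe6144      > "$host_cfs/dhchap_dhgroup"   # DH group under test
echo 'DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=:' \
        > "$host_cfs/dhchap_key"                   # host secret for keyid 4
# keyid 4 has no controller key in this run, so dhchap_ctrl_key is left unset;
# for keyids 0-3 the matching ckey string would be written there as well.
```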
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:43.708 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.709 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.967 nvme0n1 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjA5OTNiNTI5ODdkZTc2MDVmN2U0MzY1N2E1MWVjNTO34m6R: 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: ]] 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2U0Y2Y1MzkyYjRjYmVkZjQxNjYxODJhNzg3OGM1NWZkYWNmMWIzMzhlMWU3OGU3N2NiNGRmYjc5NDM5MjQzYg/3kLE=: 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.967 05:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.535 nvme0n1 00:24:44.535 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.535 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.535 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.535 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.535 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.794 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.794 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.794 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.794 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.794 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.794 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.794 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: ]] 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.795 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.364 nvme0n1 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.364 05:19:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: ]] 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.364 05:19:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.364 05:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.933 nvme0n1 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTYxZDA5YzYxM2E1OWNlNmFkMmIxMWNmNDIxMzQ5ZmYwOTZmN2IwZGRiMzc3ZGYyZk+LUg==: 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: ]] 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3NGI2MjVhNDNkMTcyM2FkNjlkMGI4ZTJmNjI4NDablSQh: 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:45.933 05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.933 
05:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.871 nvme0n1 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzJiMGZlOGIyNzY5ZWYwNDI2NjJjYjM5ODYzM2RmYjRhOWJjYzA1NzY5N2M0NmQ1NmZhNjRlNzVlZDA0ZDQ1Nrd1K/0=: 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.871 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.441 nvme0n1 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: ]] 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.441 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.441 request: 00:24:47.441 { 00:24:47.441 "name": "nvme0", 00:24:47.441 "trtype": "tcp", 00:24:47.441 "traddr": "10.0.0.1", 00:24:47.441 "adrfam": "ipv4", 00:24:47.441 "trsvcid": "4420", 00:24:47.441 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:47.441 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:47.441 "prchk_reftag": false, 00:24:47.441 "prchk_guard": false, 00:24:47.441 "hdgst": false, 00:24:47.441 "ddgst": false, 00:24:47.441 "allow_unrecognized_csi": false, 00:24:47.442 "method": "bdev_nvme_attach_controller", 00:24:47.442 "req_id": 1 00:24:47.442 } 00:24:47.442 Got JSON-RPC error response 00:24:47.442 response: 00:24:47.442 { 00:24:47.442 "code": -5, 00:24:47.442 "message": "Input/output error" 00:24:47.442 } 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
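The request/response pair above is the first negative check in host/auth.sh: with DH-HMAC-CHAP enforced on the target, bdev_nvme_attach_controller issued without any --dhchap-key is expected to fail with JSON-RPC error -5 (Input/output error) and to leave no controller registered. A minimal stand-alone sketch of that check follows; it calls scripts/rpc.py directly instead of the harness's rpc_cmd wrapper, and the rpc.py path and an already-running authenticated target at 10.0.0.1:4420 are assumptions, not part of this trace.

#!/usr/bin/env bash
# Sketch of the unauthenticated-attach check exercised in the trace above.
# Assumes the target subsystem nqn.2024-02.io.spdk:cnode0 requires DH-HMAC-CHAP
# and that this runs from an SPDK checkout (adjust RPC to taste).
RPC=./scripts/rpc.py

if "$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
    echo "unexpected: attach without a DH-HMAC-CHAP key succeeded" >&2
    exit 1
fi

# As in the trace, also confirm the failed attach left no controller behind.
[[ "$("$RPC" bdev_nvme_get_controllers | jq length)" -eq 0 ]] || exit 1
echo "unauthenticated attach rejected as expected"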
00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.442 05:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.442 request: 00:24:47.442 { 00:24:47.442 "name": "nvme0", 00:24:47.442 "trtype": "tcp", 00:24:47.442 "traddr": "10.0.0.1", 00:24:47.442 "adrfam": "ipv4", 00:24:47.442 "trsvcid": "4420", 00:24:47.442 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:47.442 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:47.442 "prchk_reftag": false, 00:24:47.442 "prchk_guard": false, 00:24:47.442 "hdgst": false, 00:24:47.442 "ddgst": false, 00:24:47.442 "dhchap_key": "key2", 00:24:47.442 "allow_unrecognized_csi": false, 00:24:47.442 "method": "bdev_nvme_attach_controller", 00:24:47.442 "req_id": 1 00:24:47.442 } 00:24:47.442 Got JSON-RPC error response 00:24:47.442 response: 00:24:47.442 { 00:24:47.442 "code": -5, 00:24:47.442 "message": "Input/output error" 00:24:47.442 } 00:24:47.442 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:47.442 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:47.442 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:47.442 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:47.442 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:47.442 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.442 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:47.442 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.442 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
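The checks that follow in the trace cover the remaining failure modes: connecting with a key the target does not expect (--dhchap-key key2) and pairing the correct host key with the wrong controller key (key1 with ckey2), both of which must also return -5; later, after a successful attach with key1/ckey1, the rekey path is exercised with bdev_nvme_set_keys, where a rotation the target accepts succeeds and a mismatched pairing is rejected with -13 (Permission denied). The sketch below condenses that rekey check; it assumes an attached controller nvme0 and keyring entries key1/key2/ckey1/ckey2 registered the way auth.sh does earlier, plus an SPDK build providing the bdev_nvme_set_keys RPC seen in this trace.

# Sketch of the rekey checks (host/auth.sh@133 and the @136/@147 negatives).
RPC=./scripts/rpc.py

# Rotating to the key pair the target side has been switched to should succeed.
"$RPC" bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# A pairing the target will not authenticate must be rejected with -13
# (Permission denied), leaving the previously negotiated keys in place.
if "$RPC" bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
    echo "unexpected: mismatched rekey accepted" >&2
    exit 1
fi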
00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.702 request: 00:24:47.702 { 00:24:47.702 "name": "nvme0", 00:24:47.702 "trtype": "tcp", 00:24:47.702 "traddr": "10.0.0.1", 00:24:47.702 "adrfam": "ipv4", 00:24:47.702 "trsvcid": "4420", 00:24:47.702 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:47.702 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:47.702 "prchk_reftag": false, 00:24:47.702 "prchk_guard": false, 00:24:47.702 "hdgst": false, 00:24:47.702 "ddgst": false, 00:24:47.702 "dhchap_key": "key1", 00:24:47.702 "dhchap_ctrlr_key": "ckey2", 00:24:47.702 "allow_unrecognized_csi": false, 00:24:47.702 "method": "bdev_nvme_attach_controller", 00:24:47.702 "req_id": 1 00:24:47.702 } 00:24:47.702 Got JSON-RPC error response 00:24:47.702 response: 00:24:47.702 { 00:24:47.702 "code": -5, 00:24:47.702 "message": "Input/output 
error" 00:24:47.702 } 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:47.702 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:47.703 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:47.703 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.703 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.703 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:47.703 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.703 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:47.703 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:47.703 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:47.703 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:47.703 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.703 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.703 nvme0n1 00:24:47.703 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.703 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:47.703 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.703 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:47.703 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:47.703 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:47.703 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:47.703 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:47.703 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:47.703 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:47.703 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:47.703 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: ]] 00:24:47.703 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:47.703 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:47.703 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.703 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.969 request: 00:24:47.969 { 00:24:47.969 "name": "nvme0", 00:24:47.969 "dhchap_key": "key1", 00:24:47.969 "dhchap_ctrlr_key": "ckey2", 00:24:47.969 "method": "bdev_nvme_set_keys", 00:24:47.969 "req_id": 1 00:24:47.969 } 00:24:47.969 Got JSON-RPC error response 00:24:47.969 response: 00:24:47.969 { 00:24:47.969 "code": -13, 00:24:47.969 "message": "Permission denied" 00:24:47.969 } 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.969 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:24:47.970 05:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:24:49.347 05:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.347 05:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:49.347 05:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.347 05:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.347 05:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.347 05:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:24:49.347 05:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ZjNWRiZGVjNmQzYWM2NGIxZjgyYmM3MzBiZjUwZWMzMThlMjExNmJjYjdhYTFjUuDkQw==: 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: ]] 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZDU4MWIyNjFiNzY3ZjNkZTBkOTU2OGUzMzBjYzEzMzU2NjdmY2VkYmQxM2M0NDc3TZWsvw==: 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.284 nvme0n1 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEwYTA1MzRjMThhZDFjMWZlZTI1MjgyYjdhNWM2NTlwh3Hp: 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: ]] 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZkY2JhNTVhMjFkNzBkY2VjZjk3NDM3MGYzZTE0MGW0HNLj: 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.284 request: 00:24:50.284 { 00:24:50.284 "name": "nvme0", 00:24:50.284 "dhchap_key": "key2", 00:24:50.284 "dhchap_ctrlr_key": "ckey1", 00:24:50.284 "method": "bdev_nvme_set_keys", 00:24:50.284 "req_id": 1 00:24:50.284 } 00:24:50.284 Got JSON-RPC error response 00:24:50.284 response: 00:24:50.284 { 00:24:50.284 "code": -13, 00:24:50.284 "message": "Permission denied" 00:24:50.284 } 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:24:50.284 05:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:24:51.659 05:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.659 05:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:51.659 05:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.659 05:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.659 05:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.659 05:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:24:51.659 05:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:24:51.659 05:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:24:51.659 05:19:27 
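The exchange above is the expected-failure leg of the DH-HMAC-CHAP re-key test: after the target side is switched to key id 2, the host's attempt to rotate while still presenting the old controller key is rejected with JSON-RPC error -13 (Permission denied), which is exactly what the NOT wrapper is waiting for. A minimal sketch of the same sequence driven by hand with rpc.py, using only calls that appear in the log (addresses, NQNs and key names copied from above; the standalone invocation is illustrative and not part of host/auth.sh):

  # attach with the first key pair, as in the passing leg of the test
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1 \
      --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
  # after the target has moved to key id 2, rotating with the stale
  # controller key is expected to fail with -13 (Permission denied)
  if rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1; then
      echo "unexpected success" >&2
  fi
  # with the rotation rejected and the target re-keyed, reconnects fail and the
  # controller is eventually dropped; the log polls this until it reports 0
  rpc.py bdev_nvme_get_controllers | jq length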
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:51.659 05:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:51.659 05:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:24:51.659 05:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:51.659 05:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:24:51.659 05:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:51.659 05:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:51.659 rmmod nvme_tcp 00:24:51.659 rmmod nvme_fabrics 00:24:51.659 05:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:51.659 05:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:24:51.659 05:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:24:51.659 05:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3709791 ']' 00:24:51.659 05:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3709791 00:24:51.659 05:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3709791 ']' 00:24:51.659 05:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3709791 00:24:51.659 05:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:24:51.659 05:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:51.659 05:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3709791 00:24:51.659 05:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:51.659 05:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:51.659 05:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3709791' 00:24:51.659 killing process with pid 3709791 00:24:51.659 05:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3709791 00:24:51.659 05:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3709791 00:24:51.659 05:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:51.659 05:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:51.659 05:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:51.659 05:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:24:51.659 05:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:24:51.659 05:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:51.659 05:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:51.659 05:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:51.659 05:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:51.659 05:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.659 05:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:24:51.659 05:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.189 05:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:54.189 05:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:54.189 05:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:54.189 05:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:54.189 05:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:54.189 05:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:24:54.189 05:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:54.189 05:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:54.189 05:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:54.189 05:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:54.189 05:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:54.189 05:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:54.189 05:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:56.114 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:56.114 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:56.114 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:56.114 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:56.114 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:56.114 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:56.373 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:56.373 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:56.373 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:56.373 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:56.373 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:56.373 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:56.373 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:56.373 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:56.373 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:56.373 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:57.309 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:24:57.309 05:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.2vf /tmp/spdk.key-null.bEM /tmp/spdk.key-sha256.h5T /tmp/spdk.key-sha384.xNW /tmp/spdk.key-sha512.iCy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:57.309 05:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:59.848 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:24:59.848 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:59.848 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
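The cleanup above tears down the kernel nvmet target through configfs before unloading the modules: the allowed-host link and host entry go first, then the port/subsystem link and the namespace, and the subsystem directory last, followed by modprobe -r. A condensed sketch of that teardown order with the same paths as the log (the attribute behind the bare 'echo 0' is not shown in the log, so the target chosen here is an assumption):

  cfg=/sys/kernel/config/nvmet
  subsys=$cfg/subsystems/nqn.2024-02.io.spdk:cnode0
  rm     "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"
  rmdir  "$cfg/hosts/nqn.2024-02.io.spdk:host0"
  echo 0 > "$subsys/namespaces/1/enable"    # assumed target of the 'echo 0' in the log
  rm -f  "$cfg/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0"
  rmdir  "$subsys/namespaces/1"
  rmdir  "$cfg/ports/1"
  rmdir  "$subsys"
  modprobe -r nvmet_tcp nvmet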
00:24:59.848 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:24:59.848 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:24:59.848 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:24:59.848 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:24:59.848 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:24:59.848 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:24:59.848 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:24:59.848 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:24:59.848 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:24:59.848 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:24:59.848 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:24:59.848 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:24:59.848 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:24:59.848 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:00.107 00:25:00.107 real 0m53.649s 00:25:00.107 user 0m48.746s 00:25:00.107 sys 0m12.092s 00:25:00.107 05:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:00.107 05:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.107 ************************************ 00:25:00.107 END TEST nvmf_auth_host 00:25:00.107 ************************************ 00:25:00.107 05:19:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:25:00.107 05:19:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:00.107 05:19:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:00.107 05:19:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:00.107 05:19:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.107 ************************************ 00:25:00.107 START TEST nvmf_digest 00:25:00.107 ************************************ 00:25:00.107 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:00.365 * Looking for test storage... 
00:25:00.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:00.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.365 --rc genhtml_branch_coverage=1 00:25:00.365 --rc genhtml_function_coverage=1 00:25:00.365 --rc genhtml_legend=1 00:25:00.365 --rc geninfo_all_blocks=1 00:25:00.365 --rc geninfo_unexecuted_blocks=1 00:25:00.365 00:25:00.365 ' 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:00.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.365 --rc genhtml_branch_coverage=1 00:25:00.365 --rc genhtml_function_coverage=1 00:25:00.365 --rc genhtml_legend=1 00:25:00.365 --rc geninfo_all_blocks=1 00:25:00.365 --rc geninfo_unexecuted_blocks=1 00:25:00.365 00:25:00.365 ' 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:00.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.365 --rc genhtml_branch_coverage=1 00:25:00.365 --rc genhtml_function_coverage=1 00:25:00.365 --rc genhtml_legend=1 00:25:00.365 --rc geninfo_all_blocks=1 00:25:00.365 --rc geninfo_unexecuted_blocks=1 00:25:00.365 00:25:00.365 ' 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:00.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.365 --rc genhtml_branch_coverage=1 00:25:00.365 --rc genhtml_function_coverage=1 00:25:00.365 --rc genhtml_legend=1 00:25:00.365 --rc geninfo_all_blocks=1 00:25:00.365 --rc geninfo_unexecuted_blocks=1 00:25:00.365 00:25:00.365 ' 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:00.365 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:00.365 
05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:00.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:00.366 05:19:36 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:25:00.366 05:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:05.625 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.625 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:25:05.625 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:05.625 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:05.625 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:05.625 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:05.625 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:05.625 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:25:05.625 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:05.625 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:25:05.625 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:25:05.625 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:25:05.625 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.883 
05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:05.883 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:05.883 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:05.883 Found net devices under 0000:86:00.0: cvl_0_0 
00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:05.883 Found net devices under 0000:86:00.1: cvl_0_1 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:25:05.883 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:05.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:25:05.884 00:25:05.884 --- 10.0.0.2 ping statistics --- 00:25:05.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.884 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:05.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:25:05.884 00:25:05.884 --- 10.0.0.1 ping statistics --- 00:25:05.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.884 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:05.884 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:06.142 ************************************ 00:25:06.142 START TEST nvmf_digest_clean 00:25:06.142 ************************************ 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
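The digest suite rebuilds the split network used by the phy TCP tests before starting the target: the target-facing port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2/24, the initiator keeps cvl_0_1 at 10.0.0.1/24 in the root namespace, an iptables rule opens TCP port 4420, and both directions are pinged. A sketch of that setup using the commands visible in the log:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator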
host/digest.sh@120 -- # local dsa_initiator 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3723534 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3723534 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3723534 ']' 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:06.142 [2024-12-09 05:19:42.622674] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:25:06.142 [2024-12-09 05:19:42.622718] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.142 [2024-12-09 05:19:42.691075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.142 [2024-12-09 05:19:42.731687] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.142 [2024-12-09 05:19:42.731722] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.142 [2024-12-09 05:19:42.731730] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:06.142 [2024-12-09 05:19:42.731736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:06.142 [2024-12-09 05:19:42.731741] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:06.142 [2024-12-09 05:19:42.732322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:06.142 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:06.400 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.400 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:06.400 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:06.400 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:06.400 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.400 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:06.400 null0 00:25:06.400 [2024-12-09 05:19:42.902114] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.400 [2024-12-09 05:19:42.926319] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.400 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.400 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:06.400 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:06.400 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:06.400 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:06.400 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:06.400 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:06.400 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:06.400 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3723661 00:25:06.400 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3723661 /var/tmp/bperf.sock 00:25:06.400 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:06.400 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3723661 ']' 00:25:06.400 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:06.400 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:25:06.400 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:06.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:06.400 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:06.400 05:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:06.400 [2024-12-09 05:19:42.981378] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:25:06.400 [2024-12-09 05:19:42.981423] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3723661 ] 00:25:06.656 [2024-12-09 05:19:43.045997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.656 [2024-12-09 05:19:43.086670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.656 05:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:06.656 05:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:06.656 05:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:06.656 05:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:06.656 05:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:06.914 05:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:06.914 05:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:07.172 nvme0n1 00:25:07.172 05:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:07.172 05:19:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:07.429 Running I/O for 2 seconds... 
00:25:09.345 25107.00 IOPS, 98.07 MiB/s [2024-12-09T04:19:45.991Z] 25581.00 IOPS, 99.93 MiB/s 00:25:09.345 Latency(us) 00:25:09.345 [2024-12-09T04:19:45.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.346 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:09.346 nvme0n1 : 2.00 25562.74 99.85 0.00 0.00 5000.09 2478.97 10998.65 00:25:09.346 [2024-12-09T04:19:45.992Z] =================================================================================================================== 00:25:09.346 [2024-12-09T04:19:45.992Z] Total : 25562.74 99.85 0.00 0.00 5000.09 2478.97 10998.65 00:25:09.346 { 00:25:09.346 "results": [ 00:25:09.346 { 00:25:09.346 "job": "nvme0n1", 00:25:09.346 "core_mask": "0x2", 00:25:09.346 "workload": "randread", 00:25:09.346 "status": "finished", 00:25:09.346 "queue_depth": 128, 00:25:09.346 "io_size": 4096, 00:25:09.346 "runtime": 2.003189, 00:25:09.346 "iops": 25562.740210733984, 00:25:09.346 "mibps": 99.85445394817962, 00:25:09.346 "io_failed": 0, 00:25:09.346 "io_timeout": 0, 00:25:09.346 "avg_latency_us": 5000.09079383678, 00:25:09.346 "min_latency_us": 2478.9704347826087, 00:25:09.346 "max_latency_us": 10998.650434782608 00:25:09.346 } 00:25:09.346 ], 00:25:09.346 "core_count": 1 00:25:09.346 } 00:25:09.346 05:19:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:09.346 05:19:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:09.346 05:19:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:09.346 05:19:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:09.346 05:19:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:09.346 | select(.opcode=="crc32c") 00:25:09.346 | "\(.module_name) \(.executed)"' 00:25:09.603 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:09.603 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:09.603 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:09.603 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:09.603 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3723661 00:25:09.603 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3723661 ']' 00:25:09.603 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3723661 00:25:09.603 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:09.603 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:09.603 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3723661 00:25:09.603 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:09.603 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
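The numbers above close out the first run_bperf pass (randread, 4 KiB, queue depth 128, no DSA): bdevperf reports roughly 25.5k IOPS over the 2-second window, then digest.sh pulls bdevperf's accel statistics over the bperf socket and checks that crc32c digest work actually ran, and ran in the software module. A sketch of that verification step mirroring the acc_module/acc_executed checks that follow in the log (socket path and jq filter copied from above; the shell wrapping is illustrative):

  stats=$(rpc.py -s /var/tmp/bperf.sock accel_get_stats)
  read -r acc_module acc_executed < <(jq -rc '.operations[]
      | select(.opcode=="crc32c")
      | "\(.module_name) \(.executed)"' <<< "$stats")
  (( acc_executed > 0 ))          # some crc32c operations actually executed
  [[ $acc_module == software ]]   # and the software module, not DSA, executed them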
reactor_1 = sudo ']' 00:25:09.603 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3723661' 00:25:09.603 killing process with pid 3723661 00:25:09.603 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3723661 00:25:09.603 Received shutdown signal, test time was about 2.000000 seconds 00:25:09.603 00:25:09.603 Latency(us) 00:25:09.603 [2024-12-09T04:19:46.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.603 [2024-12-09T04:19:46.249Z] =================================================================================================================== 00:25:09.603 [2024-12-09T04:19:46.249Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:09.603 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3723661 00:25:09.953 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:09.953 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:09.953 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:09.953 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:09.953 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:09.953 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:09.953 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:09.953 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3724251 00:25:09.953 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3724251 /var/tmp/bperf.sock 00:25:09.953 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:09.953 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3724251 ']' 00:25:09.954 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:09.954 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:09.954 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:09.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:09.954 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:09.954 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:09.954 [2024-12-09 05:19:46.418864] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:25:09.954 [2024-12-09 05:19:46.418913] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724251 ] 00:25:09.954 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:09.954 Zero copy mechanism will not be used. 00:25:09.954 [2024-12-09 05:19:46.482801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.954 [2024-12-09 05:19:46.525634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.954 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:09.954 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:09.954 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:09.954 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:09.954 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:10.212 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:10.212 05:19:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:10.470 nvme0n1 00:25:10.470 05:19:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:10.470 05:19:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:10.728 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:10.728 Zero copy mechanism will not be used. 00:25:10.728 Running I/O for 2 seconds... 
00:25:12.594 4033.00 IOPS, 504.12 MiB/s [2024-12-09T04:19:49.240Z] 4044.50 IOPS, 505.56 MiB/s 00:25:12.594 Latency(us) 00:25:12.594 [2024-12-09T04:19:49.240Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.594 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:12.594 nvme0n1 : 2.00 4048.39 506.05 0.00 0.00 3949.36 737.28 7807.33 00:25:12.594 [2024-12-09T04:19:49.240Z] =================================================================================================================== 00:25:12.594 [2024-12-09T04:19:49.240Z] Total : 4048.39 506.05 0.00 0.00 3949.36 737.28 7807.33 00:25:12.594 { 00:25:12.594 "results": [ 00:25:12.594 { 00:25:12.594 "job": "nvme0n1", 00:25:12.594 "core_mask": "0x2", 00:25:12.594 "workload": "randread", 00:25:12.594 "status": "finished", 00:25:12.594 "queue_depth": 16, 00:25:12.594 "io_size": 131072, 00:25:12.594 "runtime": 2.002031, 00:25:12.594 "iops": 4048.3888611115412, 00:25:12.594 "mibps": 506.04860763894266, 00:25:12.594 "io_failed": 0, 00:25:12.594 "io_timeout": 0, 00:25:12.594 "avg_latency_us": 3949.361815733713, 00:25:12.594 "min_latency_us": 737.28, 00:25:12.594 "max_latency_us": 7807.332173913043 00:25:12.594 } 00:25:12.594 ], 00:25:12.594 "core_count": 1 00:25:12.594 } 00:25:12.594 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:12.594 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:12.594 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:12.594 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:12.594 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:12.594 | select(.opcode=="crc32c") 00:25:12.594 | "\(.module_name) \(.executed)"' 00:25:12.851 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:12.851 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:12.851 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:12.851 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:12.851 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3724251 00:25:12.851 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3724251 ']' 00:25:12.851 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3724251 00:25:12.851 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:12.851 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:12.851 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3724251 00:25:12.851 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:12.851 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = 
sudo ']' 00:25:12.851 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3724251' 00:25:12.851 killing process with pid 3724251 00:25:12.851 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3724251 00:25:12.851 Received shutdown signal, test time was about 2.000000 seconds 00:25:12.851 00:25:12.851 Latency(us) 00:25:12.851 [2024-12-09T04:19:49.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.851 [2024-12-09T04:19:49.497Z] =================================================================================================================== 00:25:12.851 [2024-12-09T04:19:49.497Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:12.851 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3724251 00:25:13.167 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:13.167 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:13.167 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:13.167 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:13.167 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:13.167 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:13.167 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:13.167 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3724722 00:25:13.167 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3724722 /var/tmp/bperf.sock 00:25:13.167 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:13.167 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3724722 ']' 00:25:13.167 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:13.167 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:13.167 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:13.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:13.167 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.167 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:13.167 [2024-12-09 05:19:49.660174] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:25:13.167 [2024-12-09 05:19:49.660234] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724722 ] 00:25:13.167 [2024-12-09 05:19:49.727325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.425 [2024-12-09 05:19:49.765405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.425 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:13.425 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:13.425 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:13.425 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:13.425 05:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:13.425 05:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:13.425 05:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:13.990 nvme0n1 00:25:13.990 05:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:13.990 05:19:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:13.990 Running I/O for 2 seconds... 
00:25:15.854 26670.00 IOPS, 104.18 MiB/s [2024-12-09T04:19:52.500Z] 26755.00 IOPS, 104.51 MiB/s 00:25:15.854 Latency(us) 00:25:15.854 [2024-12-09T04:19:52.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.854 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:15.854 nvme0n1 : 2.01 26757.10 104.52 0.00 0.00 4774.97 3490.50 10998.65 00:25:15.854 [2024-12-09T04:19:52.500Z] =================================================================================================================== 00:25:15.854 [2024-12-09T04:19:52.500Z] Total : 26757.10 104.52 0.00 0.00 4774.97 3490.50 10998.65 00:25:15.854 { 00:25:15.854 "results": [ 00:25:15.854 { 00:25:15.854 "job": "nvme0n1", 00:25:15.854 "core_mask": "0x2", 00:25:15.854 "workload": "randwrite", 00:25:15.854 "status": "finished", 00:25:15.854 "queue_depth": 128, 00:25:15.854 "io_size": 4096, 00:25:15.854 "runtime": 2.005823, 00:25:15.854 "iops": 26757.09671292033, 00:25:15.854 "mibps": 104.51990903484504, 00:25:15.854 "io_failed": 0, 00:25:15.854 "io_timeout": 0, 00:25:15.854 "avg_latency_us": 4774.966625610616, 00:25:15.854 "min_latency_us": 3490.504347826087, 00:25:15.854 "max_latency_us": 10998.650434782608 00:25:15.854 } 00:25:15.854 ], 00:25:15.854 "core_count": 1 00:25:15.854 } 00:25:15.854 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:15.854 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:15.854 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:15.854 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:15.854 | select(.opcode=="crc32c") 00:25:15.854 | "\(.module_name) \(.executed)"' 00:25:15.854 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:16.113 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:16.113 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:16.113 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:16.113 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:16.113 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3724722 00:25:16.113 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3724722 ']' 00:25:16.113 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3724722 00:25:16.113 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:16.113 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:16.113 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3724722 00:25:16.113 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:16.113 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:25:16.113 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3724722' 00:25:16.113 killing process with pid 3724722 00:25:16.113 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3724722 00:25:16.113 Received shutdown signal, test time was about 2.000000 seconds 00:25:16.113 00:25:16.113 Latency(us) 00:25:16.113 [2024-12-09T04:19:52.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.113 [2024-12-09T04:19:52.759Z] =================================================================================================================== 00:25:16.113 [2024-12-09T04:19:52.759Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:16.113 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3724722 00:25:16.371 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:16.371 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:16.371 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:16.371 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:16.371 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:16.371 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:16.371 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:16.371 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3725301 00:25:16.371 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3725301 /var/tmp/bperf.sock 00:25:16.371 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:16.371 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3725301 ']' 00:25:16.371 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:16.371 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:16.371 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:16.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:16.371 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:16.371 05:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:16.371 [2024-12-09 05:19:52.971276] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:25:16.371 [2024-12-09 05:19:52.971326] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3725301 ] 00:25:16.371 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:16.371 Zero copy mechanism will not be used. 00:25:16.629 [2024-12-09 05:19:53.036996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.629 [2024-12-09 05:19:53.079902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.629 05:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:16.629 05:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:16.629 05:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:16.629 05:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:16.629 05:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:16.888 05:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:16.888 05:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:17.454 nvme0n1 00:25:17.454 05:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:17.454 05:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:17.454 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:17.454 Zero copy mechanism will not be used. 00:25:17.454 Running I/O for 2 seconds... 
00:25:19.323 6092.00 IOPS, 761.50 MiB/s [2024-12-09T04:19:55.969Z] 5974.00 IOPS, 746.75 MiB/s 00:25:19.323 Latency(us) 00:25:19.323 [2024-12-09T04:19:55.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.323 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:19.323 nvme0n1 : 2.01 5967.62 745.95 0.00 0.00 2675.93 1966.08 12822.26 00:25:19.323 [2024-12-09T04:19:55.969Z] =================================================================================================================== 00:25:19.323 [2024-12-09T04:19:55.969Z] Total : 5967.62 745.95 0.00 0.00 2675.93 1966.08 12822.26 00:25:19.323 { 00:25:19.323 "results": [ 00:25:19.323 { 00:25:19.323 "job": "nvme0n1", 00:25:19.323 "core_mask": "0x2", 00:25:19.323 "workload": "randwrite", 00:25:19.323 "status": "finished", 00:25:19.323 "queue_depth": 16, 00:25:19.323 "io_size": 131072, 00:25:19.323 "runtime": 2.005488, 00:25:19.323 "iops": 5967.624837446048, 00:25:19.323 "mibps": 745.953104680756, 00:25:19.323 "io_failed": 0, 00:25:19.323 "io_timeout": 0, 00:25:19.323 "avg_latency_us": 2675.9261567077424, 00:25:19.323 "min_latency_us": 1966.08, 00:25:19.323 "max_latency_us": 12822.260869565218 00:25:19.323 } 00:25:19.323 ], 00:25:19.323 "core_count": 1 00:25:19.323 } 00:25:19.323 05:19:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:19.323 05:19:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:19.323 05:19:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:19.323 05:19:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:19.323 05:19:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:19.323 | select(.opcode=="crc32c") 00:25:19.323 | "\(.module_name) \(.executed)"' 00:25:19.581 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:19.581 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:19.581 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:19.581 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:19.581 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3725301 00:25:19.581 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3725301 ']' 00:25:19.582 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3725301 00:25:19.582 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:19.582 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:19.582 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3725301 00:25:19.582 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:19.582 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:25:19.582 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3725301' 00:25:19.582 killing process with pid 3725301 00:25:19.582 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3725301 00:25:19.582 Received shutdown signal, test time was about 2.000000 seconds 00:25:19.582 00:25:19.582 Latency(us) 00:25:19.582 [2024-12-09T04:19:56.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.582 [2024-12-09T04:19:56.228Z] =================================================================================================================== 00:25:19.582 [2024-12-09T04:19:56.228Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:19.582 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3725301 00:25:19.840 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3723534 00:25:19.840 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3723534 ']' 00:25:19.840 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3723534 00:25:19.840 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:19.840 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:19.840 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3723534 00:25:19.840 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:19.840 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:19.840 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3723534' 00:25:19.840 killing process with pid 3723534 00:25:19.840 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3723534 00:25:19.840 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3723534 00:25:20.099 00:25:20.099 real 0m14.049s 00:25:20.099 user 0m26.866s 00:25:20.099 sys 0m4.315s 00:25:20.099 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:20.099 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:20.099 ************************************ 00:25:20.099 END TEST nvmf_digest_clean 00:25:20.099 ************************************ 00:25:20.099 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:20.099 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:20.099 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:20.099 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:20.099 ************************************ 00:25:20.099 START TEST nvmf_digest_error 00:25:20.099 ************************************ 00:25:20.099 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:25:20.099 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:20.099 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:20.099 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:20.099 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:20.099 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3725909 00:25:20.099 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3725909 00:25:20.099 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:20.099 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3725909 ']' 00:25:20.099 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.099 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:20.099 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.099 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:20.099 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:20.358 [2024-12-09 05:19:56.746921] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:25:20.358 [2024-12-09 05:19:56.746971] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.358 [2024-12-09 05:19:56.817839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.358 [2024-12-09 05:19:56.858378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.358 [2024-12-09 05:19:56.858413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.358 [2024-12-09 05:19:56.858422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:20.358 [2024-12-09 05:19:56.858429] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:20.358 [2024-12-09 05:19:56.858434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:20.358 [2024-12-09 05:19:56.858968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.358 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:20.358 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:20.358 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:20.358 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:20.358 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:20.358 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.358 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:20.358 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.358 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:20.358 [2024-12-09 05:19:56.939446] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:20.358 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.358 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:20.358 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:20.358 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.358 05:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:20.615 null0 00:25:20.615 [2024-12-09 05:19:57.035362] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:20.615 [2024-12-09 05:19:57.059568] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:20.615 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.615 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:20.615 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:20.615 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:20.615 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:20.615 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:20.615 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3725947 00:25:20.615 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3725947 /var/tmp/bperf.sock 00:25:20.615 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:20.615 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3725947 ']' 
00:25:20.615 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:20.615 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:20.615 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:20.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:20.615 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:20.615 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:20.615 [2024-12-09 05:19:57.101626] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:25:20.615 [2024-12-09 05:19:57.101666] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3725947 ] 00:25:20.615 [2024-12-09 05:19:57.162139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.615 [2024-12-09 05:19:57.206054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.873 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:20.873 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:20.873 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:20.873 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:20.873 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:20.873 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.873 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:20.873 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.873 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:20.873 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:21.439 nvme0n1 00:25:21.439 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:21.439 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.439 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
00:25:21.439 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.439 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:21.439 05:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:21.439 Running I/O for 2 seconds... 00:25:21.439 [2024-12-09 05:19:57.952480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.439 [2024-12-09 05:19:57.952514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.439 [2024-12-09 05:19:57.952526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.439 [2024-12-09 05:19:57.965774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.439 [2024-12-09 05:19:57.965799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.439 [2024-12-09 05:19:57.965809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.439 [2024-12-09 05:19:57.974713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.439 [2024-12-09 05:19:57.974736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.439 [2024-12-09 05:19:57.974745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.439 [2024-12-09 05:19:57.986763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.439 [2024-12-09 05:19:57.986784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.439 [2024-12-09 05:19:57.986793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.439 [2024-12-09 05:19:57.995368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.439 [2024-12-09 05:19:57.995389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.439 [2024-12-09 05:19:57.995398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.439 [2024-12-09 05:19:58.007773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.439 [2024-12-09 05:19:58.007795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.439 [2024-12-09 05:19:58.007803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.439 [2024-12-09 05:19:58.018304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.439 [2024-12-09 05:19:58.018328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.439 [2024-12-09 05:19:58.018337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.439 [2024-12-09 05:19:58.027583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.439 [2024-12-09 05:19:58.027603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.439 [2024-12-09 05:19:58.027611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.439 [2024-12-09 05:19:58.036477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.439 [2024-12-09 05:19:58.036497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.439 [2024-12-09 05:19:58.036505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.439 [2024-12-09 05:19:58.046962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.439 [2024-12-09 05:19:58.046982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.439 [2024-12-09 05:19:58.046991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.439 [2024-12-09 05:19:58.059066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.439 [2024-12-09 05:19:58.059086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.439 [2024-12-09 05:19:58.059094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.439 [2024-12-09 05:19:58.068672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.439 [2024-12-09 05:19:58.068692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.439 [2024-12-09 05:19:58.068700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.439 [2024-12-09 05:19:58.080878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.439 [2024-12-09 05:19:58.080899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.439 [2024-12-09 05:19:58.080908] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.696 [2024-12-09 05:19:58.094125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.696 [2024-12-09 05:19:58.094145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.696 [2024-12-09 05:19:58.094153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.696 [2024-12-09 05:19:58.104690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.696 [2024-12-09 05:19:58.104710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.697 [2024-12-09 05:19:58.104718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.697 [2024-12-09 05:19:58.113825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.697 [2024-12-09 05:19:58.113846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.697 [2024-12-09 05:19:58.113854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.697 [2024-12-09 05:19:58.124142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.697 [2024-12-09 05:19:58.124162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.697 [2024-12-09 05:19:58.124169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.697 [2024-12-09 05:19:58.133026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.697 [2024-12-09 05:19:58.133045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.697 [2024-12-09 05:19:58.133053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.697 [2024-12-09 05:19:58.142711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.697 [2024-12-09 05:19:58.142731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.697 [2024-12-09 05:19:58.142739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.697 [2024-12-09 05:19:58.152070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.697 [2024-12-09 05:19:58.152090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.697 [2024-12-09 
05:19:58.152098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.697 [2024-12-09 05:19:58.164285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.697 [2024-12-09 05:19:58.164305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.697 [2024-12-09 05:19:58.164313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.697 [2024-12-09 05:19:58.173405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.697 [2024-12-09 05:19:58.173426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.697 [2024-12-09 05:19:58.173434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.697 [2024-12-09 05:19:58.186407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.697 [2024-12-09 05:19:58.186428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.697 [2024-12-09 05:19:58.186435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.697 [2024-12-09 05:19:58.198507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.697 [2024-12-09 05:19:58.198527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.697 [2024-12-09 05:19:58.198539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.697 [2024-12-09 05:19:58.211273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.697 [2024-12-09 05:19:58.211294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.697 [2024-12-09 05:19:58.211302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.697 [2024-12-09 05:19:58.220478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.697 [2024-12-09 05:19:58.220498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.697 [2024-12-09 05:19:58.220507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.697 [2024-12-09 05:19:58.233372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.697 [2024-12-09 05:19:58.233393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4053 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.697 [2024-12-09 05:19:58.233401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.697 [2024-12-09 05:19:58.246371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.697 [2024-12-09 05:19:58.246392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.697 [2024-12-09 05:19:58.246400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.697 [2024-12-09 05:19:58.259177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.697 [2024-12-09 05:19:58.259197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.697 [2024-12-09 05:19:58.259205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.697 [2024-12-09 05:19:58.271869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.697 [2024-12-09 05:19:58.271890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.697 [2024-12-09 05:19:58.271898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.697 [2024-12-09 05:19:58.284427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.697 [2024-12-09 05:19:58.284447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.697 [2024-12-09 05:19:58.284455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.697 [2024-12-09 05:19:58.297091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.697 [2024-12-09 05:19:58.297111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.697 [2024-12-09 05:19:58.297119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.697 [2024-12-09 05:19:58.310438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.697 [2024-12-09 05:19:58.310459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.697 [2024-12-09 05:19:58.310467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.697 [2024-12-09 05:19:58.322056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.697 [2024-12-09 05:19:58.322076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:16 nsid:1 lba:17341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.697 [2024-12-09 05:19:58.322084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.697 [2024-12-09 05:19:58.331453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.697 [2024-12-09 05:19:58.331473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.697 [2024-12-09 05:19:58.331481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.955 [2024-12-09 05:19:58.341048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.955 [2024-12-09 05:19:58.341069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.955 [2024-12-09 05:19:58.341077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.955 [2024-12-09 05:19:58.352922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.955 [2024-12-09 05:19:58.352942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.955 [2024-12-09 05:19:58.352950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.955 [2024-12-09 05:19:58.361400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.955 [2024-12-09 05:19:58.361420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.955 [2024-12-09 05:19:58.361428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.955 [2024-12-09 05:19:58.374193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.955 [2024-12-09 05:19:58.374214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.955 [2024-12-09 05:19:58.374222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.955 [2024-12-09 05:19:58.386856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.955 [2024-12-09 05:19:58.386878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.955 [2024-12-09 05:19:58.386886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.955 [2024-12-09 05:19:58.396547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.955 [2024-12-09 05:19:58.396568] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.955 [2024-12-09 05:19:58.396584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.955 [2024-12-09 05:19:58.405543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.955 [2024-12-09 05:19:58.405563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.955 [2024-12-09 05:19:58.405572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.955 [2024-12-09 05:19:58.415275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.955 [2024-12-09 05:19:58.415296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.955 [2024-12-09 05:19:58.415304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.955 [2024-12-09 05:19:58.426324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.955 [2024-12-09 05:19:58.426344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.955 [2024-12-09 05:19:58.426352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.955 [2024-12-09 05:19:58.434668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.955 [2024-12-09 05:19:58.434688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.955 [2024-12-09 05:19:58.434696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.955 [2024-12-09 05:19:58.445144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.955 [2024-12-09 05:19:58.445164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.955 [2024-12-09 05:19:58.445173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.955 [2024-12-09 05:19:58.455499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.955 [2024-12-09 05:19:58.455518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.955 [2024-12-09 05:19:58.455526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.955 [2024-12-09 05:19:58.464325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16b56b0) 00:25:21.955 [2024-12-09 05:19:58.464346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.955 [2024-12-09 05:19:58.464354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.955 [2024-12-09 05:19:58.477480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.955 [2024-12-09 05:19:58.477501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.955 [2024-12-09 05:19:58.477509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.955 [2024-12-09 05:19:58.489878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.955 [2024-12-09 05:19:58.489903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.955 [2024-12-09 05:19:58.489911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.955 [2024-12-09 05:19:58.499392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.955 [2024-12-09 05:19:58.499414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.955 [2024-12-09 05:19:58.499423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.955 [2024-12-09 05:19:58.511103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.955 [2024-12-09 05:19:58.511125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.955 [2024-12-09 05:19:58.511134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.955 [2024-12-09 05:19:58.521433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.955 [2024-12-09 05:19:58.521455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.955 [2024-12-09 05:19:58.521463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.955 [2024-12-09 05:19:58.530028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.955 [2024-12-09 05:19:58.530049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.955 [2024-12-09 05:19:58.530057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.955 [2024-12-09 05:19:58.540222] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.955 [2024-12-09 05:19:58.540242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.955 [2024-12-09 05:19:58.540250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.955 [2024-12-09 05:19:58.550968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.955 [2024-12-09 05:19:58.550989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.955 [2024-12-09 05:19:58.551004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.955 [2024-12-09 05:19:58.562762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.955 [2024-12-09 05:19:58.562783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.955 [2024-12-09 05:19:58.562791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.955 [2024-12-09 05:19:58.571243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.956 [2024-12-09 05:19:58.571264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.956 [2024-12-09 05:19:58.571271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.956 [2024-12-09 05:19:58.583731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.956 [2024-12-09 05:19:58.583752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.956 [2024-12-09 05:19:58.583760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.956 [2024-12-09 05:19:58.596975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:21.956 [2024-12-09 05:19:58.596996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.956 [2024-12-09 05:19:58.597010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.213 [2024-12-09 05:19:58.605667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.213 [2024-12-09 05:19:58.605688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.213 [2024-12-09 05:19:58.605696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:25:22.213 [2024-12-09 05:19:58.616047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.213 [2024-12-09 05:19:58.616068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.213 [2024-12-09 05:19:58.616076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.213 [2024-12-09 05:19:58.626566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.213 [2024-12-09 05:19:58.626586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.213 [2024-12-09 05:19:58.626594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.213 [2024-12-09 05:19:58.636041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.213 [2024-12-09 05:19:58.636062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.213 [2024-12-09 05:19:58.636070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.213 [2024-12-09 05:19:58.644955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.213 [2024-12-09 05:19:58.644975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.213 [2024-12-09 05:19:58.644983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.213 [2024-12-09 05:19:58.656835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.213 [2024-12-09 05:19:58.656855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.213 [2024-12-09 05:19:58.656862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.213 [2024-12-09 05:19:58.665344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.213 [2024-12-09 05:19:58.665364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.213 [2024-12-09 05:19:58.665375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.213 [2024-12-09 05:19:58.676273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.213 [2024-12-09 05:19:58.676293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.213 [2024-12-09 05:19:58.676302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.213 [2024-12-09 05:19:58.684563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.213 [2024-12-09 05:19:58.684583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.213 [2024-12-09 05:19:58.684592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.213 [2024-12-09 05:19:58.695791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.213 [2024-12-09 05:19:58.695812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.213 [2024-12-09 05:19:58.695821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.213 [2024-12-09 05:19:58.706180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.213 [2024-12-09 05:19:58.706203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.213 [2024-12-09 05:19:58.706211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.213 [2024-12-09 05:19:58.714577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.213 [2024-12-09 05:19:58.714597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.213 [2024-12-09 05:19:58.714605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.213 [2024-12-09 05:19:58.726626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.213 [2024-12-09 05:19:58.726647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.213 [2024-12-09 05:19:58.726655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.213 [2024-12-09 05:19:58.737055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.213 [2024-12-09 05:19:58.737076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.213 [2024-12-09 05:19:58.737084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.213 [2024-12-09 05:19:58.745757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.213 [2024-12-09 05:19:58.745779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.213 [2024-12-09 05:19:58.745787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.213 [2024-12-09 05:19:58.757390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.213 [2024-12-09 05:19:58.757411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.213 [2024-12-09 05:19:58.757419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.213 [2024-12-09 05:19:58.766104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.213 [2024-12-09 05:19:58.766125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.213 [2024-12-09 05:19:58.766133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.213 [2024-12-09 05:19:58.776696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.213 [2024-12-09 05:19:58.776717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.213 [2024-12-09 05:19:58.776724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.213 [2024-12-09 05:19:58.788659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.213 [2024-12-09 05:19:58.788679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.214 [2024-12-09 05:19:58.788687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.214 [2024-12-09 05:19:58.797543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.214 [2024-12-09 05:19:58.797563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.214 [2024-12-09 05:19:58.797572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.214 [2024-12-09 05:19:58.809340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.214 [2024-12-09 05:19:58.809361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.214 [2024-12-09 05:19:58.809369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.214 [2024-12-09 05:19:58.817618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.214 [2024-12-09 05:19:58.817638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:22.214 [2024-12-09 05:19:58.817647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.214 [2024-12-09 05:19:58.828167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.214 [2024-12-09 05:19:58.828187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.214 [2024-12-09 05:19:58.828195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.214 [2024-12-09 05:19:58.839898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.214 [2024-12-09 05:19:58.839919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.214 [2024-12-09 05:19:58.839931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.214 [2024-12-09 05:19:58.848540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.214 [2024-12-09 05:19:58.848561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.214 [2024-12-09 05:19:58.848569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.471 [2024-12-09 05:19:58.861051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.471 [2024-12-09 05:19:58.861072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.471 [2024-12-09 05:19:58.861080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.471 [2024-12-09 05:19:58.872741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.471 [2024-12-09 05:19:58.872762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.471 [2024-12-09 05:19:58.872770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.471 [2024-12-09 05:19:58.881441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.471 [2024-12-09 05:19:58.881461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.471 [2024-12-09 05:19:58.881469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.471 [2024-12-09 05:19:58.892946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.471 [2024-12-09 05:19:58.892968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:2020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.471 [2024-12-09 05:19:58.892976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.471 [2024-12-09 05:19:58.905239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.471 [2024-12-09 05:19:58.905259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.471 [2024-12-09 05:19:58.905267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.471 [2024-12-09 05:19:58.916405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.471 [2024-12-09 05:19:58.916425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.471 [2024-12-09 05:19:58.916433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.471 [2024-12-09 05:19:58.930635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.471 [2024-12-09 05:19:58.930658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.471 [2024-12-09 05:19:58.930666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.471 23693.00 IOPS, 92.55 MiB/s [2024-12-09T04:19:59.117Z] [2024-12-09 05:19:58.940951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.471 [2024-12-09 05:19:58.940974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.471 [2024-12-09 05:19:58.940982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.471 [2024-12-09 05:19:58.951682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.471 [2024-12-09 05:19:58.951702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.471 [2024-12-09 05:19:58.951710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.471 [2024-12-09 05:19:58.960371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.471 [2024-12-09 05:19:58.960391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.471 [2024-12-09 05:19:58.960399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.471 [2024-12-09 05:19:58.972295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.471 
[2024-12-09 05:19:58.972315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.471 [2024-12-09 05:19:58.972323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.471 [2024-12-09 05:19:58.982001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.471 [2024-12-09 05:19:58.982022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.471 [2024-12-09 05:19:58.982030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.471 [2024-12-09 05:19:58.990902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.471 [2024-12-09 05:19:58.990922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.471 [2024-12-09 05:19:58.990931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.471 [2024-12-09 05:19:59.001284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.471 [2024-12-09 05:19:59.001306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.471 [2024-12-09 05:19:59.001315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.472 [2024-12-09 05:19:59.011748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.472 [2024-12-09 05:19:59.011769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.472 [2024-12-09 05:19:59.011777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.472 [2024-12-09 05:19:59.020946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.472 [2024-12-09 05:19:59.020966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.472 [2024-12-09 05:19:59.020975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.472 [2024-12-09 05:19:59.030537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.472 [2024-12-09 05:19:59.030556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.472 [2024-12-09 05:19:59.030564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.472 [2024-12-09 05:19:59.041118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x16b56b0) 00:25:22.472 [2024-12-09 05:19:59.041138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.472 [2024-12-09 05:19:59.041147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.472 [2024-12-09 05:19:59.049865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.472 [2024-12-09 05:19:59.049886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.472 [2024-12-09 05:19:59.049894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.472 [2024-12-09 05:19:59.062063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.472 [2024-12-09 05:19:59.062083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.472 [2024-12-09 05:19:59.062091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.472 [2024-12-09 05:19:59.074744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.472 [2024-12-09 05:19:59.074764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.472 [2024-12-09 05:19:59.074772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.472 [2024-12-09 05:19:59.086019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.472 [2024-12-09 05:19:59.086039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.472 [2024-12-09 05:19:59.086047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.472 [2024-12-09 05:19:59.097878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.472 [2024-12-09 05:19:59.097897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.472 [2024-12-09 05:19:59.097905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.472 [2024-12-09 05:19:59.109901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.472 [2024-12-09 05:19:59.109921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.472 [2024-12-09 05:19:59.109929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.729 [2024-12-09 05:19:59.118882] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.729 [2024-12-09 05:19:59.118902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.729 [2024-12-09 05:19:59.118915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.729 [2024-12-09 05:19:59.128856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.729 [2024-12-09 05:19:59.128875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.729 [2024-12-09 05:19:59.128883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.730 [2024-12-09 05:19:59.138159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.730 [2024-12-09 05:19:59.138178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.730 [2024-12-09 05:19:59.138186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.730 [2024-12-09 05:19:59.149642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.730 [2024-12-09 05:19:59.149663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.730 [2024-12-09 05:19:59.149671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.730 [2024-12-09 05:19:59.159472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.730 [2024-12-09 05:19:59.159492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.730 [2024-12-09 05:19:59.159500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.730 [2024-12-09 05:19:59.171757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.730 [2024-12-09 05:19:59.171776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.730 [2024-12-09 05:19:59.171784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.730 [2024-12-09 05:19:59.183948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.730 [2024-12-09 05:19:59.183968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.730 [2024-12-09 05:19:59.183977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:22.730 [2024-12-09 05:19:59.194983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.730 [2024-12-09 05:19:59.195008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.730 [2024-12-09 05:19:59.195016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.730 [2024-12-09 05:19:59.207764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.730 [2024-12-09 05:19:59.207784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.730 [2024-12-09 05:19:59.207792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.730 [2024-12-09 05:19:59.217014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.730 [2024-12-09 05:19:59.217034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.730 [2024-12-09 05:19:59.217042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.730 [2024-12-09 05:19:59.230184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.730 [2024-12-09 05:19:59.230205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.730 [2024-12-09 05:19:59.230212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.730 [2024-12-09 05:19:59.238849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.730 [2024-12-09 05:19:59.238869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.730 [2024-12-09 05:19:59.238877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.730 [2024-12-09 05:19:59.249507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.730 [2024-12-09 05:19:59.249527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.730 [2024-12-09 05:19:59.249536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.730 [2024-12-09 05:19:59.260537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.730 [2024-12-09 05:19:59.260557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.730 [2024-12-09 05:19:59.260565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.730 [2024-12-09 05:19:59.270564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.730 [2024-12-09 05:19:59.270584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.730 [2024-12-09 05:19:59.270592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.730 [2024-12-09 05:19:59.279497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.730 [2024-12-09 05:19:59.279516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.730 [2024-12-09 05:19:59.279524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.730 [2024-12-09 05:19:59.291777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.730 [2024-12-09 05:19:59.291797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.730 [2024-12-09 05:19:59.291805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.730 [2024-12-09 05:19:59.304229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.730 [2024-12-09 05:19:59.304250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.730 [2024-12-09 05:19:59.304261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.730 [2024-12-09 05:19:59.315704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.730 [2024-12-09 05:19:59.315724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.730 [2024-12-09 05:19:59.315732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.730 [2024-12-09 05:19:59.324779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.730 [2024-12-09 05:19:59.324799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.730 [2024-12-09 05:19:59.324807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.730 [2024-12-09 05:19:59.337449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.730 [2024-12-09 05:19:59.337469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.730 [2024-12-09 05:19:59.337477] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.730 [2024-12-09 05:19:59.345699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.730 [2024-12-09 05:19:59.345719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.730 [2024-12-09 05:19:59.345727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.730 [2024-12-09 05:19:59.357794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.730 [2024-12-09 05:19:59.357814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.730 [2024-12-09 05:19:59.357822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.730 [2024-12-09 05:19:59.370527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.730 [2024-12-09 05:19:59.370548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.730 [2024-12-09 05:19:59.370556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.988 [2024-12-09 05:19:59.379141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.988 [2024-12-09 05:19:59.379161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.988 [2024-12-09 05:19:59.379169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.988 [2024-12-09 05:19:59.390908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.989 [2024-12-09 05:19:59.390928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.989 [2024-12-09 05:19:59.390936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.989 [2024-12-09 05:19:59.403808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.989 [2024-12-09 05:19:59.403833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.989 [2024-12-09 05:19:59.403841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.989 [2024-12-09 05:19:59.415304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.989 [2024-12-09 05:19:59.415327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:22.989 [2024-12-09 05:19:59.415335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.989 [2024-12-09 05:19:59.426627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.989 [2024-12-09 05:19:59.426648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.989 [2024-12-09 05:19:59.426656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.989 [2024-12-09 05:19:59.435774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.989 [2024-12-09 05:19:59.435794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.989 [2024-12-09 05:19:59.435802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.989 [2024-12-09 05:19:59.445311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.989 [2024-12-09 05:19:59.445332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.989 [2024-12-09 05:19:59.445340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.989 [2024-12-09 05:19:59.456686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.989 [2024-12-09 05:19:59.456706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.989 [2024-12-09 05:19:59.456715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.989 [2024-12-09 05:19:59.466946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.989 [2024-12-09 05:19:59.466966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.989 [2024-12-09 05:19:59.466974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.989 [2024-12-09 05:19:59.477972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.989 [2024-12-09 05:19:59.477991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.989 [2024-12-09 05:19:59.478004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.989 [2024-12-09 05:19:59.489389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.989 [2024-12-09 05:19:59.489408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 
lba:10623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.989 [2024-12-09 05:19:59.489416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.989 [2024-12-09 05:19:59.498273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.989 [2024-12-09 05:19:59.498293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.989 [2024-12-09 05:19:59.498300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.989 [2024-12-09 05:19:59.509900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.989 [2024-12-09 05:19:59.509921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.989 [2024-12-09 05:19:59.509930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.989 [2024-12-09 05:19:59.521138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.989 [2024-12-09 05:19:59.521158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.989 [2024-12-09 05:19:59.521167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.989 [2024-12-09 05:19:59.529255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.989 [2024-12-09 05:19:59.529274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.989 [2024-12-09 05:19:59.529282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.989 [2024-12-09 05:19:59.539949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.989 [2024-12-09 05:19:59.539969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.989 [2024-12-09 05:19:59.539977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.989 [2024-12-09 05:19:59.549625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.989 [2024-12-09 05:19:59.549646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.989 [2024-12-09 05:19:59.549654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.989 [2024-12-09 05:19:59.559980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.989 [2024-12-09 05:19:59.560007] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.989 [2024-12-09 05:19:59.560016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.989 [2024-12-09 05:19:59.569812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.989 [2024-12-09 05:19:59.569832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.989 [2024-12-09 05:19:59.569840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.989 [2024-12-09 05:19:59.578497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.989 [2024-12-09 05:19:59.578516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.989 [2024-12-09 05:19:59.578527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.989 [2024-12-09 05:19:59.588293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.989 [2024-12-09 05:19:59.588313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.989 [2024-12-09 05:19:59.588321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.989 [2024-12-09 05:19:59.598349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.989 [2024-12-09 05:19:59.598368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.989 [2024-12-09 05:19:59.598376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.989 [2024-12-09 05:19:59.608265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.989 [2024-12-09 05:19:59.608285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.989 [2024-12-09 05:19:59.608293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.989 [2024-12-09 05:19:59.618164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:22.989 [2024-12-09 05:19:59.618184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.989 [2024-12-09 05:19:59.618192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.989 [2024-12-09 05:19:59.626542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 
00:25:22.989 [2024-12-09 05:19:59.626562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.989 [2024-12-09 05:19:59.626570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.248 [2024-12-09 05:19:59.639172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:23.248 [2024-12-09 05:19:59.639192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.248 [2024-12-09 05:19:59.639200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.248 [2024-12-09 05:19:59.650501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:23.248 [2024-12-09 05:19:59.650521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.248 [2024-12-09 05:19:59.650529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.248 [2024-12-09 05:19:59.659599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:23.248 [2024-12-09 05:19:59.659620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.248 [2024-12-09 05:19:59.659628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.248 [2024-12-09 05:19:59.672684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:23.248 [2024-12-09 05:19:59.672705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.248 [2024-12-09 05:19:59.672713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.248 [2024-12-09 05:19:59.684325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:23.248 [2024-12-09 05:19:59.684346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.248 [2024-12-09 05:19:59.684354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.248 [2024-12-09 05:19:59.692871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:23.248 [2024-12-09 05:19:59.692892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.248 [2024-12-09 05:19:59.692900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.248 [2024-12-09 05:19:59.703610] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:23.248 [2024-12-09 05:19:59.703630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.248 [2024-12-09 05:19:59.703639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.248 [2024-12-09 05:19:59.713272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:23.248 [2024-12-09 05:19:59.713292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.248 [2024-12-09 05:19:59.713300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.248 [2024-12-09 05:19:59.723182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:23.248 [2024-12-09 05:19:59.723202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.248 [2024-12-09 05:19:59.723210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.248 [2024-12-09 05:19:59.732990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:23.248 [2024-12-09 05:19:59.733015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.248 [2024-12-09 05:19:59.733023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.248 [2024-12-09 05:19:59.742196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:23.248 [2024-12-09 05:19:59.742215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.248 [2024-12-09 05:19:59.742223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.248 [2024-12-09 05:19:59.751992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:23.248 [2024-12-09 05:19:59.752017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.248 [2024-12-09 05:19:59.752031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.248 [2024-12-09 05:19:59.761057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:23.248 [2024-12-09 05:19:59.761077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.248 [2024-12-09 05:19:59.761085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:23.248 [2024-12-09 05:19:59.772280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:23.248 [2024-12-09 05:19:59.772300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.248 [2024-12-09 05:19:59.772308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.248 [2024-12-09 05:19:59.781094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:23.248 [2024-12-09 05:19:59.781113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.248 [2024-12-09 05:19:59.781121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.248 [2024-12-09 05:19:59.790510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:23.248 [2024-12-09 05:19:59.790530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.248 [2024-12-09 05:19:59.790538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.248 [2024-12-09 05:19:59.801005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:23.248 [2024-12-09 05:19:59.801025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.248 [2024-12-09 05:19:59.801033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.248 [2024-12-09 05:19:59.809266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:23.248 [2024-12-09 05:19:59.809285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.248 [2024-12-09 05:19:59.809293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.248 [2024-12-09 05:19:59.820458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:23.248 [2024-12-09 05:19:59.820478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.248 [2024-12-09 05:19:59.820485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.248 [2024-12-09 05:19:59.829954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:23.248 [2024-12-09 05:19:59.829974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.248 [2024-12-09 05:19:59.829983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.248 [2024-12-09 05:19:59.838256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:23.248 [2024-12-09 05:19:59.838279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.248 [2024-12-09 05:19:59.838287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.248 [2024-12-09 05:19:59.849265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:23.248 [2024-12-09 05:19:59.849285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.249 [2024-12-09 05:19:59.849293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.249 [2024-12-09 05:19:59.862298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:23.249 [2024-12-09 05:19:59.862319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.249 [2024-12-09 05:19:59.862327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.249 [2024-12-09 05:19:59.873373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:23.249 [2024-12-09 05:19:59.873392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.249 [2024-12-09 05:19:59.873399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.249 [2024-12-09 05:19:59.882345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:23.249 [2024-12-09 05:19:59.882364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.249 [2024-12-09 05:19:59.882371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.507 [2024-12-09 05:19:59.894550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:23.507 [2024-12-09 05:19:59.894571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.507 [2024-12-09 05:19:59.894580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.507 [2024-12-09 05:19:59.906532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0) 00:25:23.507 [2024-12-09 05:19:59.906554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.507 [2024-12-09 05:19:59.906562] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.507 [2024-12-09 05:19:59.915681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0)
00:25:23.507 [2024-12-09 05:19:59.915702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.507 [2024-12-09 05:19:59.915710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.507 [2024-12-09 05:19:59.928650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0)
00:25:23.507 [2024-12-09 05:19:59.928672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.507 [2024-12-09 05:19:59.928681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.507 [2024-12-09 05:19:59.940761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16b56b0)
00:25:23.507 [2024-12-09 05:19:59.940783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.507 [2024-12-09 05:19:59.940792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.507 23980.00 IOPS, 93.67 MiB/s
00:25:23.507 Latency(us)
00:25:23.507 [2024-12-09T04:20:00.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:23.507 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:25:23.507 nvme0n1 : 2.00 24006.56 93.78 0.00 0.00 5326.96 2607.19 16640.45
00:25:23.507 [2024-12-09T04:20:00.153Z] ===================================================================================================================
00:25:23.507 [2024-12-09T04:20:00.153Z] Total : 24006.56 93.78 0.00 0.00 5326.96 2607.19 16640.45
00:25:23.507 {
00:25:23.507 "results": [
00:25:23.507 {
00:25:23.507 "job": "nvme0n1",
00:25:23.507 "core_mask": "0x2",
00:25:23.507 "workload": "randread",
00:25:23.507 "status": "finished",
00:25:23.507 "queue_depth": 128,
00:25:23.507 "io_size": 4096,
00:25:23.507 "runtime": 2.003119,
00:25:23.507 "iops": 24006.561766924482,
00:25:23.507 "mibps": 93.77563190204876,
00:25:23.507 "io_failed": 0,
00:25:23.507 "io_timeout": 0,
00:25:23.507 "avg_latency_us": 5326.959648759883,
00:25:23.507 "min_latency_us": 2607.1930434782607,
00:25:23.507 "max_latency_us": 16640.445217391305
00:25:23.507 }
00:25:23.507 ],
00:25:23.507 "core_count": 1
00:25:23.507 }
00:25:23.507 05:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:23.507 05:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:23.507 05:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
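The get_transient_errcount call above asks the running bdevperf for nvme0n1's iostat JSON over the bperf RPC socket and, with the jq filter on the trace lines that follow, reduces it to the command_transient_transport_error counter kept under driver_specific.nvme_error.status_code (populated because the controller was attached after bdev_nvme_set_options --nvme-error-stat). A minimal stand-alone sketch of the same extraction; the helper name and the final check are mine, while the rpc.py path, socket, bdev name and jq path are taken from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # count_transient_errors is an illustrative name, not the digest.sh helper.
    count_transient_errors() {
        "$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    }
    count=$(count_transient_errors nvme0n1)
    (( count > 0 )) || echo "expected transient transport errors, got $count" >&2

The (( 188 > 0 )) in the trace below is the expanded form of exactly this kind of check: 188 transient transport errors were recorded for the 4 KiB randread run summarized above.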
00:25:23.507 05:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:23.507 | .driver_specific
00:25:23.507 | .nvme_error
00:25:23.507 | .status_code
00:25:23.507 | .command_transient_transport_error'
00:25:23.765 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 188 > 0 ))
00:25:23.765 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3725947
00:25:23.765 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3725947 ']'
00:25:23.765 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3725947
00:25:23.765 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:25:23.765 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:23.765 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3725947
00:25:23.765 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:23.765 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:25:23.765 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3725947'
00:25:23.765 killing process with pid 3725947
00:25:23.765 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3725947
00:25:23.765 Received shutdown signal, test time was about 2.000000 seconds
00:25:23.765
00:25:23.765 Latency(us)
00:25:23.765 [2024-12-09T04:20:00.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:23.765 [2024-12-09T04:20:00.411Z] ===================================================================================================================
00:25:23.765 [2024-12-09T04:20:00.411Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:23.765 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3725947
00:25:24.023 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:25:24.023 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:25:24.023 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:25:24.023 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:25:24.023 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:25:24.023 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3726620
00:25:24.023 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3726620 /var/tmp/bperf.sock
00:25:24.023 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:25:24.023 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3726620 ']'
00:25:24.023 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
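Here the first error pass is complete: the (( 188 > 0 )) assertion passed, the old bdevperf (pid 3725947) was killed, and run_bperf_err relaunches bdevperf idle (-z) for 128 KiB random reads at queue depth 16. Condensed into plain commands, the relaunch plus the configuration performed in the trace that follows looks roughly like the sketch below; every path and argument is copied from the trace, the wait loop merely stands in for the waitforlisten helper, and this is an illustration of the flow rather than the test script itself:

    bperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # Idle bdevperf on core 1 (-m 2): 128 KiB random reads, queue depth 16, 2 s runs.
    "$bperf" -m 2 -r "$sock" -w randread -o 131072 -t 2 -q 16 -z &
    while [ ! -S "$sock" ]; do sleep 0.1; done   # stand-in for waitforlisten

    # Keep per-status-code NVMe error counters and retry failed I/O indefinitely.
    "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the target with data digests enabled (--ddgst), starting with crc32c
    # injection switched off, then corrupt crc32c results at an interval of 32
    # operations and start the run (these calls appear in the trace below).
    "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t disable
    "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 32
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests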
00:25:24.023 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:24.023 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:24.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:24.023 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:24.023 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:24.023 [2024-12-09 05:20:00.469937] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization...
00:25:24.023 [2024-12-09 05:20:00.469987] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3726620 ]
00:25:24.023 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:24.023 Zero copy mechanism will not be used.
00:25:24.023 [2024-12-09 05:20:00.534157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:24.023 [2024-12-09 05:20:00.577413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:24.023 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:24.023 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:25:24.280 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:24.280 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:24.280 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:24.280 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:24.280 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:24.280 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:24.280 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:24.280 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:24.845 05:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.845 05:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:24.845 05:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:24.845 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:24.845 Zero copy mechanism will not be used. 00:25:24.845 Running I/O for 2 seconds... 00:25:24.845 [2024-12-09 05:20:01.364570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:24.845 [2024-12-09 05:20:01.364604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.845 [2024-12-09 05:20:01.364616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.845 [2024-12-09 05:20:01.372407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:24.845 [2024-12-09 05:20:01.372432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.845 [2024-12-09 05:20:01.372440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.845 [2024-12-09 05:20:01.379709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:24.845 [2024-12-09 05:20:01.379731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.845 [2024-12-09 05:20:01.379739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.845 [2024-12-09 05:20:01.386900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:24.845 [2024-12-09 05:20:01.386924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.845 [2024-12-09 05:20:01.386933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.845 [2024-12-09 05:20:01.393768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:24.845 [2024-12-09 05:20:01.393791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.845 [2024-12-09 05:20:01.393799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.845 [2024-12-09 05:20:01.400644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:24.845 [2024-12-09 05:20:01.400666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.845 [2024-12-09 05:20:01.400675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.845 [2024-12-09 05:20:01.407122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:24.845 [2024-12-09 05:20:01.407149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.845 [2024-12-09 05:20:01.407157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.845 [2024-12-09 05:20:01.410835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:24.845 [2024-12-09 05:20:01.410857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.845 [2024-12-09 05:20:01.410865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.845 [2024-12-09 05:20:01.417598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:24.845 [2024-12-09 05:20:01.417620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.845 [2024-12-09 05:20:01.417628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.845 [2024-12-09 05:20:01.424175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:24.845 [2024-12-09 05:20:01.424197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.845 [2024-12-09 05:20:01.424206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.846 [2024-12-09 05:20:01.431267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:24.846 [2024-12-09 05:20:01.431290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.846 [2024-12-09 05:20:01.431299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.846 [2024-12-09 05:20:01.438159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:24.846 [2024-12-09 05:20:01.438182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.846 [2024-12-09 05:20:01.438191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.846 [2024-12-09 05:20:01.444929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:24.846 [2024-12-09 05:20:01.444950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.846 [2024-12-09 05:20:01.444958] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.846 [2024-12-09 05:20:01.452786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:24.846 [2024-12-09 05:20:01.452809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.846 [2024-12-09 05:20:01.452818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.846 [2024-12-09 05:20:01.460907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:24.846 [2024-12-09 05:20:01.460931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.846 [2024-12-09 05:20:01.460940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.846 [2024-12-09 05:20:01.468967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:24.846 [2024-12-09 05:20:01.468992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.846 [2024-12-09 05:20:01.469009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.846 [2024-12-09 05:20:01.475893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:24.846 [2024-12-09 05:20:01.475918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.846 [2024-12-09 05:20:01.475926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.846 [2024-12-09 05:20:01.484143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:24.846 [2024-12-09 05:20:01.484167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.846 [2024-12-09 05:20:01.484176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.103 [2024-12-09 05:20:01.492786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.103 [2024-12-09 05:20:01.492808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.103 [2024-12-09 05:20:01.492817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.103 [2024-12-09 05:20:01.500762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.103 [2024-12-09 05:20:01.500784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.103 
[2024-12-09 05:20:01.500792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.103 [2024-12-09 05:20:01.507110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.103 [2024-12-09 05:20:01.507132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.103 [2024-12-09 05:20:01.507140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.103 [2024-12-09 05:20:01.514021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.103 [2024-12-09 05:20:01.514045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.103 [2024-12-09 05:20:01.514053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.103 [2024-12-09 05:20:01.520867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.103 [2024-12-09 05:20:01.520889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.103 [2024-12-09 05:20:01.520897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.103 [2024-12-09 05:20:01.527826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.103 [2024-12-09 05:20:01.527849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.103 [2024-12-09 05:20:01.527861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.103 [2024-12-09 05:20:01.534463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.103 [2024-12-09 05:20:01.534485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.103 [2024-12-09 05:20:01.534494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.103 [2024-12-09 05:20:01.541158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.103 [2024-12-09 05:20:01.541180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.103 [2024-12-09 05:20:01.541189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.103 [2024-12-09 05:20:01.548295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.103 [2024-12-09 05:20:01.548317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10528 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.103 [2024-12-09 05:20:01.548325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.103 [2024-12-09 05:20:01.555022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.103 [2024-12-09 05:20:01.555044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.103 [2024-12-09 05:20:01.555052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.103 [2024-12-09 05:20:01.561937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.103 [2024-12-09 05:20:01.561959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.103 [2024-12-09 05:20:01.561967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.103 [2024-12-09 05:20:01.568498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.103 [2024-12-09 05:20:01.568520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.103 [2024-12-09 05:20:01.568528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.103 [2024-12-09 05:20:01.574971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.103 [2024-12-09 05:20:01.574993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.103 [2024-12-09 05:20:01.575007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.103 [2024-12-09 05:20:01.581662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.103 [2024-12-09 05:20:01.581683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.103 [2024-12-09 05:20:01.581691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.103 [2024-12-09 05:20:01.588248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.103 [2024-12-09 05:20:01.588276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.103 [2024-12-09 05:20:01.588284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.103 [2024-12-09 05:20:01.594952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.103 [2024-12-09 05:20:01.594975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.103 [2024-12-09 05:20:01.594983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.103 [2024-12-09 05:20:01.601845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.103 [2024-12-09 05:20:01.601866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.103 [2024-12-09 05:20:01.601874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.103 [2024-12-09 05:20:01.608164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.103 [2024-12-09 05:20:01.608186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.103 [2024-12-09 05:20:01.608195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.103 [2024-12-09 05:20:01.612104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.103 [2024-12-09 05:20:01.612126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.103 [2024-12-09 05:20:01.612134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.103 [2024-12-09 05:20:01.618821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.103 [2024-12-09 05:20:01.618842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.103 [2024-12-09 05:20:01.618851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.103 [2024-12-09 05:20:01.625346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.103 [2024-12-09 05:20:01.625368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.103 [2024-12-09 05:20:01.625389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.103 [2024-12-09 05:20:01.632021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.103 [2024-12-09 05:20:01.632044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.103 [2024-12-09 05:20:01.632053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.103 [2024-12-09 05:20:01.638650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.103 [2024-12-09 05:20:01.638673] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.103 [2024-12-09 05:20:01.638681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.103 [2024-12-09 05:20:01.645342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.103 [2024-12-09 05:20:01.645364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.103 [2024-12-09 05:20:01.645372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.103 [2024-12-09 05:20:01.651774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.103 [2024-12-09 05:20:01.651797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.103 [2024-12-09 05:20:01.651805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.104 [2024-12-09 05:20:01.658521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.104 [2024-12-09 05:20:01.658542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.104 [2024-12-09 05:20:01.658550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.104 [2024-12-09 05:20:01.665033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.104 [2024-12-09 05:20:01.665055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.104 [2024-12-09 05:20:01.665063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.104 [2024-12-09 05:20:01.672169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.104 [2024-12-09 05:20:01.672191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.104 [2024-12-09 05:20:01.672199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.104 [2024-12-09 05:20:01.679031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.104 [2024-12-09 05:20:01.679053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.104 [2024-12-09 05:20:01.679061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.104 [2024-12-09 05:20:01.685401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18a81a0) 00:25:25.104 [2024-12-09 05:20:01.685423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.104 [2024-12-09 05:20:01.685431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.104 [2024-12-09 05:20:01.691770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.104 [2024-12-09 05:20:01.691791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.104 [2024-12-09 05:20:01.691799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.104 [2024-12-09 05:20:01.698299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.104 [2024-12-09 05:20:01.698321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.104 [2024-12-09 05:20:01.698333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.104 [2024-12-09 05:20:01.704684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.104 [2024-12-09 05:20:01.704706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.104 [2024-12-09 05:20:01.704714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.104 [2024-12-09 05:20:01.710545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.104 [2024-12-09 05:20:01.710567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.104 [2024-12-09 05:20:01.710575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.104 [2024-12-09 05:20:01.716873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.104 [2024-12-09 05:20:01.716894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.104 [2024-12-09 05:20:01.716902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.104 [2024-12-09 05:20:01.722230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.104 [2024-12-09 05:20:01.722252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.104 [2024-12-09 05:20:01.722260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.104 [2024-12-09 05:20:01.727880] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.104 [2024-12-09 05:20:01.727902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.104 [2024-12-09 05:20:01.727910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.104 [2024-12-09 05:20:01.734250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.104 [2024-12-09 05:20:01.734270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.104 [2024-12-09 05:20:01.734279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.104 [2024-12-09 05:20:01.737559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.104 [2024-12-09 05:20:01.737581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.104 [2024-12-09 05:20:01.737589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.104 [2024-12-09 05:20:01.744099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.104 [2024-12-09 05:20:01.744121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.104 [2024-12-09 05:20:01.744129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.361 [2024-12-09 05:20:01.750532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.361 [2024-12-09 05:20:01.750554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.361 [2024-12-09 05:20:01.750562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.361 [2024-12-09 05:20:01.756884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.361 [2024-12-09 05:20:01.756905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.361 [2024-12-09 05:20:01.756914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.361 [2024-12-09 05:20:01.763574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.361 [2024-12-09 05:20:01.763595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.361 [2024-12-09 05:20:01.763604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:25:25.361 [2024-12-09 05:20:01.769880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.361 [2024-12-09 05:20:01.769902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.361 [2024-12-09 05:20:01.769911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.361 [2024-12-09 05:20:01.776750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.361 [2024-12-09 05:20:01.776772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.361 [2024-12-09 05:20:01.776780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.361 [2024-12-09 05:20:01.783682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.361 [2024-12-09 05:20:01.783703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.361 [2024-12-09 05:20:01.783712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.361 [2024-12-09 05:20:01.790196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.361 [2024-12-09 05:20:01.790217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.361 [2024-12-09 05:20:01.790226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.361 [2024-12-09 05:20:01.796738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.361 [2024-12-09 05:20:01.796760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.361 [2024-12-09 05:20:01.796768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.361 [2024-12-09 05:20:01.803619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.361 [2024-12-09 05:20:01.803639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.361 [2024-12-09 05:20:01.803651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.361 [2024-12-09 05:20:01.810390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.361 [2024-12-09 05:20:01.810412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.361 [2024-12-09 05:20:01.810419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.361 [2024-12-09 05:20:01.817200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.361 [2024-12-09 05:20:01.817221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.361 [2024-12-09 05:20:01.817229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.361 [2024-12-09 05:20:01.824373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.361 [2024-12-09 05:20:01.824394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.361 [2024-12-09 05:20:01.824402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.361 [2024-12-09 05:20:01.830042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.362 [2024-12-09 05:20:01.830064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.362 [2024-12-09 05:20:01.830072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.362 [2024-12-09 05:20:01.836548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.362 [2024-12-09 05:20:01.836570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.362 [2024-12-09 05:20:01.836578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.362 [2024-12-09 05:20:01.843055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.362 [2024-12-09 05:20:01.843077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.362 [2024-12-09 05:20:01.843085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.362 [2024-12-09 05:20:01.849394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.362 [2024-12-09 05:20:01.849416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.362 [2024-12-09 05:20:01.849425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.362 [2024-12-09 05:20:01.856483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.362 [2024-12-09 05:20:01.856504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.362 [2024-12-09 05:20:01.856512] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.362 [2024-12-09 05:20:01.862849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.362 [2024-12-09 05:20:01.862874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.362 [2024-12-09 05:20:01.862882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.362 [2024-12-09 05:20:01.869175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.362 [2024-12-09 05:20:01.869197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.362 [2024-12-09 05:20:01.869204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.362 [2024-12-09 05:20:01.876034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.362 [2024-12-09 05:20:01.876056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.362 [2024-12-09 05:20:01.876064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.362 [2024-12-09 05:20:01.882627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.362 [2024-12-09 05:20:01.882647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.362 [2024-12-09 05:20:01.882655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.362 [2024-12-09 05:20:01.889275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.362 [2024-12-09 05:20:01.889296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.362 [2024-12-09 05:20:01.889303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.362 [2024-12-09 05:20:01.896005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.362 [2024-12-09 05:20:01.896025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.362 [2024-12-09 05:20:01.896034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.362 [2024-12-09 05:20:01.902633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.362 [2024-12-09 05:20:01.902654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.362 [2024-12-09 
05:20:01.902662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.362 [2024-12-09 05:20:01.909527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.362 [2024-12-09 05:20:01.909548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.362 [2024-12-09 05:20:01.909556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.362 [2024-12-09 05:20:01.916736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.362 [2024-12-09 05:20:01.916757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.362 [2024-12-09 05:20:01.916765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.362 [2024-12-09 05:20:01.923189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.362 [2024-12-09 05:20:01.923210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.362 [2024-12-09 05:20:01.923219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.362 [2024-12-09 05:20:01.930059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.362 [2024-12-09 05:20:01.930080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.362 [2024-12-09 05:20:01.930088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.362 [2024-12-09 05:20:01.936337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.362 [2024-12-09 05:20:01.936358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.362 [2024-12-09 05:20:01.936366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.362 [2024-12-09 05:20:01.942859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.362 [2024-12-09 05:20:01.942880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.362 [2024-12-09 05:20:01.942888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.362 [2024-12-09 05:20:01.949508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.362 [2024-12-09 05:20:01.949530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:25.362 [2024-12-09 05:20:01.949538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.362 [2024-12-09 05:20:01.955107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.362 [2024-12-09 05:20:01.955129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.362 [2024-12-09 05:20:01.955137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.362 [2024-12-09 05:20:01.961569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.362 [2024-12-09 05:20:01.961591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.362 [2024-12-09 05:20:01.961599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.362 [2024-12-09 05:20:01.967992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.362 [2024-12-09 05:20:01.968019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.362 [2024-12-09 05:20:01.968027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.362 [2024-12-09 05:20:01.974501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.362 [2024-12-09 05:20:01.974522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.362 [2024-12-09 05:20:01.974534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.362 [2024-12-09 05:20:01.980752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.362 [2024-12-09 05:20:01.980775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.362 [2024-12-09 05:20:01.980783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.362 [2024-12-09 05:20:01.987744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.362 [2024-12-09 05:20:01.987765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.362 [2024-12-09 05:20:01.987774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.362 [2024-12-09 05:20:01.994323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.362 [2024-12-09 05:20:01.994345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 
nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.362 [2024-12-09 05:20:01.994353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.362 [2024-12-09 05:20:02.001019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.362 [2024-12-09 05:20:02.001040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.362 [2024-12-09 05:20:02.001049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.622 [2024-12-09 05:20:02.007465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.622 [2024-12-09 05:20:02.007486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.622 [2024-12-09 05:20:02.007494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.622 [2024-12-09 05:20:02.013782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.622 [2024-12-09 05:20:02.013803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.622 [2024-12-09 05:20:02.013811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.622 [2024-12-09 05:20:02.020081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.622 [2024-12-09 05:20:02.020103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.622 [2024-12-09 05:20:02.020111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.622 [2024-12-09 05:20:02.026394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.622 [2024-12-09 05:20:02.026416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.622 [2024-12-09 05:20:02.026425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.622 [2024-12-09 05:20:02.030396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.622 [2024-12-09 05:20:02.030420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.622 [2024-12-09 05:20:02.030428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.622 [2024-12-09 05:20:02.035242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.622 [2024-12-09 05:20:02.035263] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.622 [2024-12-09 05:20:02.035272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.622 [2024-12-09 05:20:02.041204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.622 [2024-12-09 05:20:02.041225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.622 [2024-12-09 05:20:02.041233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.622 [2024-12-09 05:20:02.046812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.622 [2024-12-09 05:20:02.046833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.622 [2024-12-09 05:20:02.046841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.622 [2024-12-09 05:20:02.052665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.622 [2024-12-09 05:20:02.052686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.622 [2024-12-09 05:20:02.052694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.622 [2024-12-09 05:20:02.058446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.622 [2024-12-09 05:20:02.058467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.622 [2024-12-09 05:20:02.058474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.622 [2024-12-09 05:20:02.064164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.622 [2024-12-09 05:20:02.064185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.622 [2024-12-09 05:20:02.064193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.622 [2024-12-09 05:20:02.069802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.622 [2024-12-09 05:20:02.069823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.622 [2024-12-09 05:20:02.069831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.622 [2024-12-09 05:20:02.075518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.622 
[2024-12-09 05:20:02.075539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.622 [2024-12-09 05:20:02.075547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.622 [2024-12-09 05:20:02.081356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.622 [2024-12-09 05:20:02.081379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.622 [2024-12-09 05:20:02.081387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.622 [2024-12-09 05:20:02.087310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.622 [2024-12-09 05:20:02.087332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.622 [2024-12-09 05:20:02.087340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.622 [2024-12-09 05:20:02.093243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.622 [2024-12-09 05:20:02.093264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.622 [2024-12-09 05:20:02.093272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.622 [2024-12-09 05:20:02.099199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.622 [2024-12-09 05:20:02.099219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.622 [2024-12-09 05:20:02.099227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.622 [2024-12-09 05:20:02.105181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.622 [2024-12-09 05:20:02.105202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.622 [2024-12-09 05:20:02.105210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.622 [2024-12-09 05:20:02.110925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.622 [2024-12-09 05:20:02.110946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.622 [2024-12-09 05:20:02.110955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.622 [2024-12-09 05:20:02.116801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x18a81a0) 00:25:25.622 [2024-12-09 05:20:02.116823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.622 [2024-12-09 05:20:02.116831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.622 [2024-12-09 05:20:02.122967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.622 [2024-12-09 05:20:02.122989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.622 [2024-12-09 05:20:02.123002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.622 [2024-12-09 05:20:02.128555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.622 [2024-12-09 05:20:02.128577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.623 [2024-12-09 05:20:02.128590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.623 [2024-12-09 05:20:02.134233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.623 [2024-12-09 05:20:02.134255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.623 [2024-12-09 05:20:02.134263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.623 [2024-12-09 05:20:02.141003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.623 [2024-12-09 05:20:02.141025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.623 [2024-12-09 05:20:02.141033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.623 [2024-12-09 05:20:02.146763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.623 [2024-12-09 05:20:02.146785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.623 [2024-12-09 05:20:02.146794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.623 [2024-12-09 05:20:02.153142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.623 [2024-12-09 05:20:02.153164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.623 [2024-12-09 05:20:02.153172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.623 [2024-12-09 05:20:02.159021] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.623 [2024-12-09 05:20:02.159043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.623 [2024-12-09 05:20:02.159051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.623 [2024-12-09 05:20:02.165478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.623 [2024-12-09 05:20:02.165499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.623 [2024-12-09 05:20:02.165508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.623 [2024-12-09 05:20:02.173048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.623 [2024-12-09 05:20:02.173069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.623 [2024-12-09 05:20:02.173077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.623 [2024-12-09 05:20:02.180059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.623 [2024-12-09 05:20:02.180081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.623 [2024-12-09 05:20:02.180089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.623 [2024-12-09 05:20:02.187030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.623 [2024-12-09 05:20:02.187051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.623 [2024-12-09 05:20:02.187060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.623 [2024-12-09 05:20:02.194377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.623 [2024-12-09 05:20:02.194399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.623 [2024-12-09 05:20:02.194406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.623 [2024-12-09 05:20:02.201365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.623 [2024-12-09 05:20:02.201387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.623 [2024-12-09 05:20:02.201395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:25:25.623 [2024-12-09 05:20:02.208443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.623 [2024-12-09 05:20:02.208464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.623 [2024-12-09 05:20:02.208472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.623 [2024-12-09 05:20:02.215508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.623 [2024-12-09 05:20:02.215530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.623 [2024-12-09 05:20:02.215538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.623 [2024-12-09 05:20:02.222308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.623 [2024-12-09 05:20:02.222329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.623 [2024-12-09 05:20:02.222337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.623 [2024-12-09 05:20:02.228843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.623 [2024-12-09 05:20:02.228863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.623 [2024-12-09 05:20:02.228871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.623 [2024-12-09 05:20:02.234897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.623 [2024-12-09 05:20:02.234917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.623 [2024-12-09 05:20:02.234925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.623 [2024-12-09 05:20:02.240771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.623 [2024-12-09 05:20:02.240792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.623 [2024-12-09 05:20:02.240804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.623 [2024-12-09 05:20:02.246691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.623 [2024-12-09 05:20:02.246712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.623 [2024-12-09 05:20:02.246721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.623 [2024-12-09 05:20:02.252434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.623 [2024-12-09 05:20:02.252455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.623 [2024-12-09 05:20:02.252463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.623 [2024-12-09 05:20:02.257766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.623 [2024-12-09 05:20:02.257788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.623 [2024-12-09 05:20:02.257796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.623 [2024-12-09 05:20:02.262924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.623 [2024-12-09 05:20:02.262946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.623 [2024-12-09 05:20:02.262954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.882 [2024-12-09 05:20:02.269288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.882 [2024-12-09 05:20:02.269310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.882 [2024-12-09 05:20:02.269319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.882 [2024-12-09 05:20:02.275465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.882 [2024-12-09 05:20:02.275487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.882 [2024-12-09 05:20:02.275496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.883 [2024-12-09 05:20:02.282341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.883 [2024-12-09 05:20:02.282363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.883 [2024-12-09 05:20:02.282372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.883 [2024-12-09 05:20:02.290433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.883 [2024-12-09 05:20:02.290454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.883 [2024-12-09 05:20:02.290462] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.883 [2024-12-09 05:20:02.298552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.883 [2024-12-09 05:20:02.298577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.883 [2024-12-09 05:20:02.298585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.883 [2024-12-09 05:20:02.302675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.883 [2024-12-09 05:20:02.302696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.883 [2024-12-09 05:20:02.302704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.883 [2024-12-09 05:20:02.310756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.883 [2024-12-09 05:20:02.310776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.883 [2024-12-09 05:20:02.310785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.883 [2024-12-09 05:20:02.318795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.883 [2024-12-09 05:20:02.318816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.883 [2024-12-09 05:20:02.318823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.883 [2024-12-09 05:20:02.326331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.883 [2024-12-09 05:20:02.326352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.883 [2024-12-09 05:20:02.326360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.883 [2024-12-09 05:20:02.333481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.883 [2024-12-09 05:20:02.333501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.883 [2024-12-09 05:20:02.333510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.883 [2024-12-09 05:20:02.340693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.883 [2024-12-09 05:20:02.340713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.883 [2024-12-09 
05:20:02.340721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.883 [2024-12-09 05:20:02.347901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.883 [2024-12-09 05:20:02.347922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.883 [2024-12-09 05:20:02.347930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.883 4697.00 IOPS, 587.12 MiB/s [2024-12-09T04:20:02.529Z] [2024-12-09 05:20:02.356378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.883 [2024-12-09 05:20:02.356398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.883 [2024-12-09 05:20:02.356407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.883 [2024-12-09 05:20:02.363664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.883 [2024-12-09 05:20:02.363685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.883 [2024-12-09 05:20:02.363693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.883 [2024-12-09 05:20:02.371249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.883 [2024-12-09 05:20:02.371270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.883 [2024-12-09 05:20:02.371278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.883 [2024-12-09 05:20:02.378135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.883 [2024-12-09 05:20:02.378155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.883 [2024-12-09 05:20:02.378163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.883 [2024-12-09 05:20:02.385308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.883 [2024-12-09 05:20:02.385328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.883 [2024-12-09 05:20:02.385336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.883 [2024-12-09 05:20:02.392406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.883 [2024-12-09 05:20:02.392426] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.883 [2024-12-09 05:20:02.392434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.883 [2024-12-09 05:20:02.399287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.883 [2024-12-09 05:20:02.399307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.883 [2024-12-09 05:20:02.399315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.883 [2024-12-09 05:20:02.406095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.883 [2024-12-09 05:20:02.406115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.883 [2024-12-09 05:20:02.406123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.883 [2024-12-09 05:20:02.412625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.883 [2024-12-09 05:20:02.412645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.883 [2024-12-09 05:20:02.412653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.883 [2024-12-09 05:20:02.419568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.883 [2024-12-09 05:20:02.419588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.883 [2024-12-09 05:20:02.419600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.883 [2024-12-09 05:20:02.425569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.883 [2024-12-09 05:20:02.425590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.883 [2024-12-09 05:20:02.425598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.883 [2024-12-09 05:20:02.432324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.883 [2024-12-09 05:20:02.432345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.883 [2024-12-09 05:20:02.432353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.883 [2024-12-09 05:20:02.439253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.883 [2024-12-09 
05:20:02.439275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.883 [2024-12-09 05:20:02.439283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.883 [2024-12-09 05:20:02.446137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.883 [2024-12-09 05:20:02.446158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.883 [2024-12-09 05:20:02.446166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.883 [2024-12-09 05:20:02.452990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.883 [2024-12-09 05:20:02.453017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.883 [2024-12-09 05:20:02.453026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.883 [2024-12-09 05:20:02.459755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.883 [2024-12-09 05:20:02.459777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.883 [2024-12-09 05:20:02.459785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.883 [2024-12-09 05:20:02.467177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.884 [2024-12-09 05:20:02.467199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.884 [2024-12-09 05:20:02.467207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.884 [2024-12-09 05:20:02.474226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.884 [2024-12-09 05:20:02.474248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.884 [2024-12-09 05:20:02.474256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.884 [2024-12-09 05:20:02.482316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.884 [2024-12-09 05:20:02.482339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.884 [2024-12-09 05:20:02.482348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.884 [2024-12-09 05:20:02.490559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x18a81a0) 00:25:25.884 [2024-12-09 05:20:02.490582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.884 [2024-12-09 05:20:02.490591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.884 [2024-12-09 05:20:02.498493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.884 [2024-12-09 05:20:02.498516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.884 [2024-12-09 05:20:02.498524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.884 [2024-12-09 05:20:02.505736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.884 [2024-12-09 05:20:02.505759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.884 [2024-12-09 05:20:02.505767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.884 [2024-12-09 05:20:02.513197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.884 [2024-12-09 05:20:02.513219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.884 [2024-12-09 05:20:02.513227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.884 [2024-12-09 05:20:02.520241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:25.884 [2024-12-09 05:20:02.520262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.884 [2024-12-09 05:20:02.520271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.143 [2024-12-09 05:20:02.527181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.143 [2024-12-09 05:20:02.527203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.143 [2024-12-09 05:20:02.527212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.143 [2024-12-09 05:20:02.534289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.143 [2024-12-09 05:20:02.534311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.143 [2024-12-09 05:20:02.534319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.143 [2024-12-09 05:20:02.540914] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.143 [2024-12-09 05:20:02.540935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.143 [2024-12-09 05:20:02.540947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.143 [2024-12-09 05:20:02.546959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.143 [2024-12-09 05:20:02.546980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.143 [2024-12-09 05:20:02.546988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.143 [2024-12-09 05:20:02.553245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.143 [2024-12-09 05:20:02.553266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.143 [2024-12-09 05:20:02.553273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.143 [2024-12-09 05:20:02.559418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.143 [2024-12-09 05:20:02.559440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.143 [2024-12-09 05:20:02.559448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.143 [2024-12-09 05:20:02.565932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.143 [2024-12-09 05:20:02.565954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.143 [2024-12-09 05:20:02.565962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.143 [2024-12-09 05:20:02.571873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.143 [2024-12-09 05:20:02.571893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.143 [2024-12-09 05:20:02.571901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.143 [2024-12-09 05:20:02.578181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.143 [2024-12-09 05:20:02.578202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.143 [2024-12-09 05:20:02.578211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:25:26.143 [2024-12-09 05:20:02.583906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.143 [2024-12-09 05:20:02.583927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.143 [2024-12-09 05:20:02.583935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.143 [2024-12-09 05:20:02.589493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.143 [2024-12-09 05:20:02.589513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.143 [2024-12-09 05:20:02.589521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.143 [2024-12-09 05:20:02.595197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.143 [2024-12-09 05:20:02.595222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.143 [2024-12-09 05:20:02.595230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.143 [2024-12-09 05:20:02.601185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.144 [2024-12-09 05:20:02.601206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.144 [2024-12-09 05:20:02.601214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.144 [2024-12-09 05:20:02.607072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.144 [2024-12-09 05:20:02.607093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.144 [2024-12-09 05:20:02.607101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.144 [2024-12-09 05:20:02.613161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.144 [2024-12-09 05:20:02.613182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.144 [2024-12-09 05:20:02.613190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.144 [2024-12-09 05:20:02.618974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.144 [2024-12-09 05:20:02.618997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.144 [2024-12-09 05:20:02.619010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.144 [2024-12-09 05:20:02.624429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.144 [2024-12-09 05:20:02.624451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.144 [2024-12-09 05:20:02.624460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.144 [2024-12-09 05:20:02.629922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.144 [2024-12-09 05:20:02.629943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.144 [2024-12-09 05:20:02.629952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.144 [2024-12-09 05:20:02.636032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.144 [2024-12-09 05:20:02.636055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.144 [2024-12-09 05:20:02.636064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.144 [2024-12-09 05:20:02.642134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.144 [2024-12-09 05:20:02.642155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.144 [2024-12-09 05:20:02.642164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.144 [2024-12-09 05:20:02.648480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.144 [2024-12-09 05:20:02.648502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.144 [2024-12-09 05:20:02.648510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.144 [2024-12-09 05:20:02.656288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.144 [2024-12-09 05:20:02.656309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.144 [2024-12-09 05:20:02.656317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.144 [2024-12-09 05:20:02.663689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.144 [2024-12-09 05:20:02.663710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.144 [2024-12-09 05:20:02.663717] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.144 [2024-12-09 05:20:02.670671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.144 [2024-12-09 05:20:02.670692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.144 [2024-12-09 05:20:02.670700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.144 [2024-12-09 05:20:02.677542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.144 [2024-12-09 05:20:02.677563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.144 [2024-12-09 05:20:02.677571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.144 [2024-12-09 05:20:02.683948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.144 [2024-12-09 05:20:02.683969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.144 [2024-12-09 05:20:02.683977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.144 [2024-12-09 05:20:02.690537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.144 [2024-12-09 05:20:02.690559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.144 [2024-12-09 05:20:02.690567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.144 [2024-12-09 05:20:02.696617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.144 [2024-12-09 05:20:02.696638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.144 [2024-12-09 05:20:02.696646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.144 [2024-12-09 05:20:02.702610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.144 [2024-12-09 05:20:02.702631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.144 [2024-12-09 05:20:02.702643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.144 [2024-12-09 05:20:02.708814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.144 [2024-12-09 05:20:02.708837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
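The entries in this stretch of the log all repeat the same three-line pattern: the initiator's TCP transport (nvme_tcp_accel_seq_recv_compute_crc32_done) computes a CRC-32C over a received C2HData payload, finds it does not match the data digest carried in the PDU, and logs "data digest error"; the affected READ is then printed together with its completion, which carries COMMAND TRANSIENT TRANSPORT ERROR (status code type 00h, status code 22h). That is the outcome this digest-error test provokes on purpose and counts afterwards. A minimal sketch of the host-side check, assuming a little-endian DDGST field and using a plain bit-at-a-time CRC-32C instead of SPDK's accelerated path (function names here are illustrative only):

```python
import struct

CRC32C_POLY_REFLECTED = 0x82F63B78  # CRC-32C (Castagnoli), reflected polynomial


def crc32c(data: bytes) -> int:
    # Bit-at-a-time CRC-32C; only meant to illustrate the digest math,
    # not how SPDK actually computes it (it uses accelerated back-ends).
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ CRC32C_POLY_REFLECTED
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF


def data_digest_ok(payload: bytes, ddgst: bytes) -> bool:
    # When data digest is negotiated, a DDGST field trails the data of a
    # C2HData PDU; little-endian byte order is assumed here for illustration.
    (received,) = struct.unpack("<I", ddgst)
    return crc32c(payload) == received


if __name__ == "__main__":
    good = bytes(range(64))
    digest = struct.pack("<I", crc32c(good))
    assert data_digest_ok(good, digest)
    # Corrupt one byte: the resulting mismatch is what each *ERROR* line reports.
    bad = good[:-1] + bytes([good[-1] ^ 0xFF])
    assert not data_digest_ok(bad, digest)
```

Every failed READ in this run is 32 blocks long (len:32), which matches the 128 KiB randread I/O size of the job assuming a 4 KiB block size, so the transient-error counter grows by one per I/O that hits the injected digest corruption.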
00:25:26.144 [2024-12-09 05:20:02.708844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.144 [2024-12-09 05:20:02.714761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.144 [2024-12-09 05:20:02.714782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.144 [2024-12-09 05:20:02.714790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.144 [2024-12-09 05:20:02.720625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.144 [2024-12-09 05:20:02.720646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.144 [2024-12-09 05:20:02.720654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.144 [2024-12-09 05:20:02.726305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.144 [2024-12-09 05:20:02.726326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.144 [2024-12-09 05:20:02.726333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.144 [2024-12-09 05:20:02.732234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.144 [2024-12-09 05:20:02.732258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.144 [2024-12-09 05:20:02.732278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.144 [2024-12-09 05:20:02.738305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.144 [2024-12-09 05:20:02.738326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.144 [2024-12-09 05:20:02.738334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.144 [2024-12-09 05:20:02.744214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.144 [2024-12-09 05:20:02.744237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.144 [2024-12-09 05:20:02.744246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.144 [2024-12-09 05:20:02.750157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.144 [2024-12-09 05:20:02.750179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24256 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.144 [2024-12-09 05:20:02.750187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.144 [2024-12-09 05:20:02.756099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.144 [2024-12-09 05:20:02.756121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.144 [2024-12-09 05:20:02.756130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.144 [2024-12-09 05:20:02.762062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.144 [2024-12-09 05:20:02.762085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.144 [2024-12-09 05:20:02.762093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.145 [2024-12-09 05:20:02.767989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.145 [2024-12-09 05:20:02.768015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.145 [2024-12-09 05:20:02.768023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.145 [2024-12-09 05:20:02.774569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.145 [2024-12-09 05:20:02.774591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.145 [2024-12-09 05:20:02.774599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.145 [2024-12-09 05:20:02.780675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.145 [2024-12-09 05:20:02.780696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.145 [2024-12-09 05:20:02.780704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.404 [2024-12-09 05:20:02.786936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.404 [2024-12-09 05:20:02.786957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.404 [2024-12-09 05:20:02.786966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.404 [2024-12-09 05:20:02.793122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.404 [2024-12-09 05:20:02.793143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.404 [2024-12-09 05:20:02.793151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.404 [2024-12-09 05:20:02.799423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.404 [2024-12-09 05:20:02.799444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.404 [2024-12-09 05:20:02.799452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.404 [2024-12-09 05:20:02.805499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.404 [2024-12-09 05:20:02.805520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.404 [2024-12-09 05:20:02.805531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.404 [2024-12-09 05:20:02.811460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.404 [2024-12-09 05:20:02.811481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.404 [2024-12-09 05:20:02.811489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.404 [2024-12-09 05:20:02.817420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.404 [2024-12-09 05:20:02.817442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.404 [2024-12-09 05:20:02.817450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.404 [2024-12-09 05:20:02.823294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.404 [2024-12-09 05:20:02.823315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.404 [2024-12-09 05:20:02.823323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.404 [2024-12-09 05:20:02.829132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.404 [2024-12-09 05:20:02.829153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.404 [2024-12-09 05:20:02.829161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.404 [2024-12-09 05:20:02.834855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.404 
[2024-12-09 05:20:02.834876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.404 [2024-12-09 05:20:02.834883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.404 [2024-12-09 05:20:02.840686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.404 [2024-12-09 05:20:02.840708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.404 [2024-12-09 05:20:02.840716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.404 [2024-12-09 05:20:02.846431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.404 [2024-12-09 05:20:02.846453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.404 [2024-12-09 05:20:02.846461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.404 [2024-12-09 05:20:02.852061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.404 [2024-12-09 05:20:02.852082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.404 [2024-12-09 05:20:02.852091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.404 [2024-12-09 05:20:02.857973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.404 [2024-12-09 05:20:02.858004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.404 [2024-12-09 05:20:02.858012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.404 [2024-12-09 05:20:02.863852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.404 [2024-12-09 05:20:02.863874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.404 [2024-12-09 05:20:02.863881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.404 [2024-12-09 05:20:02.869361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.404 [2024-12-09 05:20:02.869383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.404 [2024-12-09 05:20:02.869391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.404 [2024-12-09 05:20:02.875005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x18a81a0) 00:25:26.404 [2024-12-09 05:20:02.875026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.404 [2024-12-09 05:20:02.875034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.404 [2024-12-09 05:20:02.880743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.404 [2024-12-09 05:20:02.880763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.404 [2024-12-09 05:20:02.880771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.404 [2024-12-09 05:20:02.886526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.404 [2024-12-09 05:20:02.886547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.404 [2024-12-09 05:20:02.886555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.404 [2024-12-09 05:20:02.892239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.404 [2024-12-09 05:20:02.892261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.404 [2024-12-09 05:20:02.892269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.404 [2024-12-09 05:20:02.897944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.404 [2024-12-09 05:20:02.897965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.404 [2024-12-09 05:20:02.897973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.404 [2024-12-09 05:20:02.903743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.404 [2024-12-09 05:20:02.903764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.404 [2024-12-09 05:20:02.903772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.404 [2024-12-09 05:20:02.909382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.405 [2024-12-09 05:20:02.909403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.405 [2024-12-09 05:20:02.909411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.405 [2024-12-09 05:20:02.915087] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.405 [2024-12-09 05:20:02.915108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.405 [2024-12-09 05:20:02.915116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.405 [2024-12-09 05:20:02.920859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.405 [2024-12-09 05:20:02.920880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.405 [2024-12-09 05:20:02.920888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.405 [2024-12-09 05:20:02.926739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.405 [2024-12-09 05:20:02.926760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.405 [2024-12-09 05:20:02.926768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.405 [2024-12-09 05:20:02.932492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.405 [2024-12-09 05:20:02.932513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.405 [2024-12-09 05:20:02.932520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.405 [2024-12-09 05:20:02.938267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.405 [2024-12-09 05:20:02.938288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.405 [2024-12-09 05:20:02.938296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.405 [2024-12-09 05:20:02.943996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.405 [2024-12-09 05:20:02.944026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.405 [2024-12-09 05:20:02.944035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.405 [2024-12-09 05:20:02.950093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.405 [2024-12-09 05:20:02.950115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.405 [2024-12-09 05:20:02.950123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:25:26.405 [2024-12-09 05:20:02.956032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.405 [2024-12-09 05:20:02.956053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.405 [2024-12-09 05:20:02.956065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.405 [2024-12-09 05:20:02.961926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.405 [2024-12-09 05:20:02.961947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.405 [2024-12-09 05:20:02.961955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.405 [2024-12-09 05:20:02.967722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.405 [2024-12-09 05:20:02.967743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.405 [2024-12-09 05:20:02.967750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.405 [2024-12-09 05:20:02.973593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.405 [2024-12-09 05:20:02.973614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.405 [2024-12-09 05:20:02.973622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.405 [2024-12-09 05:20:02.979284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.405 [2024-12-09 05:20:02.979305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.405 [2024-12-09 05:20:02.979313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.405 [2024-12-09 05:20:02.984847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.405 [2024-12-09 05:20:02.984868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.405 [2024-12-09 05:20:02.984876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.405 [2024-12-09 05:20:02.990708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.405 [2024-12-09 05:20:02.990730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.405 [2024-12-09 05:20:02.990738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.405 [2024-12-09 05:20:02.996326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.405 [2024-12-09 05:20:02.996349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.405 [2024-12-09 05:20:02.996358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.405 [2024-12-09 05:20:03.002099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.405 [2024-12-09 05:20:03.002122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.405 [2024-12-09 05:20:03.002146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.405 [2024-12-09 05:20:03.007782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.405 [2024-12-09 05:20:03.007808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.405 [2024-12-09 05:20:03.007816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.405 [2024-12-09 05:20:03.013524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.405 [2024-12-09 05:20:03.013545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.405 [2024-12-09 05:20:03.013553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.405 [2024-12-09 05:20:03.019725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.405 [2024-12-09 05:20:03.019746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.405 [2024-12-09 05:20:03.019755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.405 [2024-12-09 05:20:03.026536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.405 [2024-12-09 05:20:03.026557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.405 [2024-12-09 05:20:03.026565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.405 [2024-12-09 05:20:03.034167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.405 [2024-12-09 05:20:03.034190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.405 [2024-12-09 05:20:03.034198] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.405 [2024-12-09 05:20:03.041206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.405 [2024-12-09 05:20:03.041228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.405 [2024-12-09 05:20:03.041237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.664 [2024-12-09 05:20:03.048430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.664 [2024-12-09 05:20:03.048453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.664 [2024-12-09 05:20:03.048461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.664 [2024-12-09 05:20:03.056430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.664 [2024-12-09 05:20:03.056451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.664 [2024-12-09 05:20:03.056459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.664 [2024-12-09 05:20:03.065206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.664 [2024-12-09 05:20:03.065228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.664 [2024-12-09 05:20:03.065237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.664 [2024-12-09 05:20:03.073510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.664 [2024-12-09 05:20:03.073531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.664 [2024-12-09 05:20:03.073540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.664 [2024-12-09 05:20:03.081698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.664 [2024-12-09 05:20:03.081720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.664 [2024-12-09 05:20:03.081728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.664 [2024-12-09 05:20:03.090809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.664 [2024-12-09 05:20:03.090831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.664 
[2024-12-09 05:20:03.090839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.664 [2024-12-09 05:20:03.099712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.664 [2024-12-09 05:20:03.099735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.664 [2024-12-09 05:20:03.099744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.664 [2024-12-09 05:20:03.107801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.664 [2024-12-09 05:20:03.107824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.664 [2024-12-09 05:20:03.107832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.665 [2024-12-09 05:20:03.115781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.665 [2024-12-09 05:20:03.115803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.665 [2024-12-09 05:20:03.115812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.665 [2024-12-09 05:20:03.123713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.665 [2024-12-09 05:20:03.123737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.665 [2024-12-09 05:20:03.123746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.665 [2024-12-09 05:20:03.132173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.665 [2024-12-09 05:20:03.132195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.665 [2024-12-09 05:20:03.132204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.665 [2024-12-09 05:20:03.140628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.665 [2024-12-09 05:20:03.140654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.665 [2024-12-09 05:20:03.140663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.665 [2024-12-09 05:20:03.149672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.665 [2024-12-09 05:20:03.149694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15840 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.665 [2024-12-09 05:20:03.149702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.665 [2024-12-09 05:20:03.158402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.665 [2024-12-09 05:20:03.158425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.665 [2024-12-09 05:20:03.158433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.665 [2024-12-09 05:20:03.166393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.665 [2024-12-09 05:20:03.166414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.665 [2024-12-09 05:20:03.166423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.665 [2024-12-09 05:20:03.173983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.665 [2024-12-09 05:20:03.174019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.665 [2024-12-09 05:20:03.174028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.665 [2024-12-09 05:20:03.180912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.665 [2024-12-09 05:20:03.180933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.665 [2024-12-09 05:20:03.180941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.665 [2024-12-09 05:20:03.188046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.665 [2024-12-09 05:20:03.188067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.665 [2024-12-09 05:20:03.188076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.665 [2024-12-09 05:20:03.195852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.665 [2024-12-09 05:20:03.195873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.665 [2024-12-09 05:20:03.195882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.665 [2024-12-09 05:20:03.203221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.665 [2024-12-09 05:20:03.203242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:10 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.665 [2024-12-09 05:20:03.203262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.665 [2024-12-09 05:20:03.210263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.665 [2024-12-09 05:20:03.210285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.665 [2024-12-09 05:20:03.210293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.665 [2024-12-09 05:20:03.216765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.665 [2024-12-09 05:20:03.216786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.665 [2024-12-09 05:20:03.216795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.665 [2024-12-09 05:20:03.223051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.665 [2024-12-09 05:20:03.223072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.665 [2024-12-09 05:20:03.223080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.665 [2024-12-09 05:20:03.229280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.665 [2024-12-09 05:20:03.229302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.665 [2024-12-09 05:20:03.229310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.665 [2024-12-09 05:20:03.235416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.665 [2024-12-09 05:20:03.235437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.665 [2024-12-09 05:20:03.235445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.665 [2024-12-09 05:20:03.241627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.665 [2024-12-09 05:20:03.241647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.665 [2024-12-09 05:20:03.241655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.665 [2024-12-09 05:20:03.247788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.665 [2024-12-09 05:20:03.247809] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.665 [2024-12-09 05:20:03.247817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.665 [2024-12-09 05:20:03.253833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.665 [2024-12-09 05:20:03.253853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.665 [2024-12-09 05:20:03.253860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.665 [2024-12-09 05:20:03.259767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.665 [2024-12-09 05:20:03.259788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.665 [2024-12-09 05:20:03.259800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.665 [2024-12-09 05:20:03.265595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.665 [2024-12-09 05:20:03.265616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.665 [2024-12-09 05:20:03.265624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.665 [2024-12-09 05:20:03.271494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.665 [2024-12-09 05:20:03.271514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.665 [2024-12-09 05:20:03.271522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.665 [2024-12-09 05:20:03.277157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.665 [2024-12-09 05:20:03.277178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.665 [2024-12-09 05:20:03.277186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.665 [2024-12-09 05:20:03.282773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.665 [2024-12-09 05:20:03.282794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.665 [2024-12-09 05:20:03.282802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.665 [2024-12-09 05:20:03.288386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 
00:25:26.665 [2024-12-09 05:20:03.288407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.665 [2024-12-09 05:20:03.288415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.665 [2024-12-09 05:20:03.294032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.665 [2024-12-09 05:20:03.294053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.666 [2024-12-09 05:20:03.294060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.666 [2024-12-09 05:20:03.299577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.666 [2024-12-09 05:20:03.299599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.666 [2024-12-09 05:20:03.299606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.666 [2024-12-09 05:20:03.305401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.666 [2024-12-09 05:20:03.305421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.666 [2024-12-09 05:20:03.305430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.925 [2024-12-09 05:20:03.311261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.925 [2024-12-09 05:20:03.311293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.925 [2024-12-09 05:20:03.311301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.925 [2024-12-09 05:20:03.316882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.925 [2024-12-09 05:20:03.316903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.925 [2024-12-09 05:20:03.316911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.925 [2024-12-09 05:20:03.322649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.925 [2024-12-09 05:20:03.322670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.925 [2024-12-09 05:20:03.322678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.925 [2024-12-09 05:20:03.328425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.925 [2024-12-09 05:20:03.328447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.925 [2024-12-09 05:20:03.328455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.925 [2024-12-09 05:20:03.334101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.926 [2024-12-09 05:20:03.334122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.926 [2024-12-09 05:20:03.334130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.926 [2024-12-09 05:20:03.339721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.926 [2024-12-09 05:20:03.339742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.926 [2024-12-09 05:20:03.339750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.926 [2024-12-09 05:20:03.345485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.926 [2024-12-09 05:20:03.345507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.926 [2024-12-09 05:20:03.345516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.926 [2024-12-09 05:20:03.351370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.926 [2024-12-09 05:20:03.351392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.926 [2024-12-09 05:20:03.351400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.926 4756.50 IOPS, 594.56 MiB/s [2024-12-09T04:20:03.572Z] [2024-12-09 05:20:03.358265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a81a0) 00:25:26.926 [2024-12-09 05:20:03.358286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.926 [2024-12-09 05:20:03.358294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.926 00:25:26.926 Latency(us) 00:25:26.926 [2024-12-09T04:20:03.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:26.926 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:26.926 nvme0n1 : 2.00 4758.86 594.86 0.00 0.00 3358.60 658.92 17210.32 00:25:26.926 [2024-12-09T04:20:03.572Z] =================================================================================================================== 00:25:26.926 [2024-12-09T04:20:03.572Z] Total 
: 4758.86 594.86 0.00 0.00 3358.60 658.92 17210.32 00:25:26.926 { 00:25:26.926 "results": [ 00:25:26.926 { 00:25:26.926 "job": "nvme0n1", 00:25:26.926 "core_mask": "0x2", 00:25:26.926 "workload": "randread", 00:25:26.926 "status": "finished", 00:25:26.926 "queue_depth": 16, 00:25:26.926 "io_size": 131072, 00:25:26.926 "runtime": 2.002371, 00:25:26.926 "iops": 4758.858373398336, 00:25:26.926 "mibps": 594.857296674792, 00:25:26.926 "io_failed": 0, 00:25:26.926 "io_timeout": 0, 00:25:26.926 "avg_latency_us": 3358.6020727573036, 00:25:26.926 "min_latency_us": 658.9217391304347, 00:25:26.926 "max_latency_us": 17210.32347826087 00:25:26.926 } 00:25:26.926 ], 00:25:26.926 "core_count": 1 00:25:26.926 } 00:25:26.926 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:26.926 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:26.926 | .driver_specific 00:25:26.926 | .nvme_error 00:25:26.926 | .status_code 00:25:26.926 | .command_transient_transport_error' 00:25:26.926 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:26.926 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:26.926 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 308 > 0 )) 00:25:26.926 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3726620 00:25:26.926 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3726620 ']' 00:25:26.926 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3726620 00:25:27.186 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:27.186 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:27.186 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3726620 00:25:27.186 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:27.186 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:27.186 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3726620' 00:25:27.186 killing process with pid 3726620 00:25:27.186 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3726620 00:25:27.186 Received shutdown signal, test time was about 2.000000 seconds 00:25:27.186 00:25:27.186 Latency(us) 00:25:27.186 [2024-12-09T04:20:03.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.186 [2024-12-09T04:20:03.832Z] =================================================================================================================== 00:25:27.186 [2024-12-09T04:20:03.832Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:27.186 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3726620 00:25:27.186 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:27.186 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:27.186 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:27.186 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:27.186 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:27.186 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3727100 00:25:27.186 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3727100 /var/tmp/bperf.sock 00:25:27.186 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:27.186 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3727100 ']' 00:25:27.186 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:27.186 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:27.186 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:27.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:27.186 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:27.186 05:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:27.445 [2024-12-09 05:20:03.865148] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:25:27.445 [2024-12-09 05:20:03.865200] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3727100 ] 00:25:27.445 [2024-12-09 05:20:03.930581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.445 [2024-12-09 05:20:03.968164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.445 05:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:27.445 05:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:27.445 05:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:27.445 05:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:27.705 05:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:27.705 05:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.705 05:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:27.705 05:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.705 05:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:27.705 05:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:27.964 nvme0n1 00:25:27.964 05:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:27.964 05:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.964 05:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:27.964 05:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.964 05:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:27.964 05:20:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:28.224 Running I/O for 2 seconds... 
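(For reference, the randwrite digest-error pass traced above reduces to the RPC sequence below. This is a minimal sketch assembled only from commands that appear verbatim in this trace; it assumes the nvmf target application is already running and reachable on its default RPC socket, and that bdevperf was started with "-z -r /var/tmp/bperf.sock" exactly as shown on the host/digest.sh@57 line.)

  #!/usr/bin/env bash
  # Paths as used by this job's workspace (taken from the trace above).
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  BPERF_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
  BPERF_SOCK=/var/tmp/bperf.sock

  # bperf side: record NVMe error stats and retry transport errors indefinitely.
  $RPC -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # target side (default RPC socket): make sure crc32c error injection starts disabled.
  $RPC accel_error_inject_error -o crc32c -t disable

  # bperf side: attach the controller with data digest (--ddgst) enabled so a bad CRC32C is detected.
  $RPC -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # target side: corrupt every 256th crc32c operation, producing the digest errors logged below.
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256

  # run the workload configured on the bdevperf command line (randwrite, 4096B, qd 128, 2s).
  $BPERF_PY -s $BPERF_SOCK perform_tests

  # pass/fail check used by get_transient_errcount: the transient transport error count must be > 0.
  $RPC -s $BPERF_SOCK bdev_get_iostat -b nvme0n1 | \
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'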
00:25:28.224 [2024-12-09 05:20:04.683830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.224 [2024-12-09 05:20:04.683983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.224 [2024-12-09 05:20:04.684019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.224 [2024-12-09 05:20:04.693764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.224 [2024-12-09 05:20:04.693905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.224 [2024-12-09 05:20:04.693928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.224 [2024-12-09 05:20:04.703501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.224 [2024-12-09 05:20:04.703637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.224 [2024-12-09 05:20:04.703657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.224 [2024-12-09 05:20:04.713274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.224 [2024-12-09 05:20:04.713429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.224 [2024-12-09 05:20:04.713448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.224 [2024-12-09 05:20:04.723090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.224 [2024-12-09 05:20:04.723227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.224 [2024-12-09 05:20:04.723244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.224 [2024-12-09 05:20:04.732773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.224 [2024-12-09 05:20:04.732907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.224 [2024-12-09 05:20:04.732924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.224 [2024-12-09 05:20:04.742504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.224 [2024-12-09 05:20:04.742641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.224 [2024-12-09 05:20:04.742658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 
cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.224 [2024-12-09 05:20:04.752351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.224 [2024-12-09 05:20:04.752486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.224 [2024-12-09 05:20:04.752504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.224 [2024-12-09 05:20:04.762029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.224 [2024-12-09 05:20:04.762182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.224 [2024-12-09 05:20:04.762202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.224 [2024-12-09 05:20:04.771719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.224 [2024-12-09 05:20:04.771850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.224 [2024-12-09 05:20:04.771869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.224 [2024-12-09 05:20:04.781432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.224 [2024-12-09 05:20:04.781583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.224 [2024-12-09 05:20:04.781602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.224 [2024-12-09 05:20:04.791158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.224 [2024-12-09 05:20:04.791292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.224 [2024-12-09 05:20:04.791309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.224 [2024-12-09 05:20:04.800873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.224 [2024-12-09 05:20:04.801011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.224 [2024-12-09 05:20:04.801028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.224 [2024-12-09 05:20:04.810629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.224 [2024-12-09 05:20:04.810765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.224 [2024-12-09 05:20:04.810784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.224 [2024-12-09 05:20:04.820314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.224 [2024-12-09 05:20:04.820447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.224 [2024-12-09 05:20:04.820465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.224 [2024-12-09 05:20:04.829983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.224 [2024-12-09 05:20:04.830142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.224 [2024-12-09 05:20:04.830160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.224 [2024-12-09 05:20:04.839645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.224 [2024-12-09 05:20:04.839778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.224 [2024-12-09 05:20:04.839798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.224 [2024-12-09 05:20:04.849306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.224 [2024-12-09 05:20:04.849459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.224 [2024-12-09 05:20:04.849477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.224 [2024-12-09 05:20:04.859068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.224 [2024-12-09 05:20:04.859205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.224 [2024-12-09 05:20:04.859223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.484 [2024-12-09 05:20:04.868963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.484 [2024-12-09 05:20:04.869109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.484 [2024-12-09 05:20:04.869127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.484 [2024-12-09 05:20:04.878918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.484 [2024-12-09 05:20:04.879061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.484 [2024-12-09 05:20:04.879080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.484 [2024-12-09 05:20:04.888569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.484 [2024-12-09 05:20:04.888700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.484 [2024-12-09 05:20:04.888717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.484 [2024-12-09 05:20:04.898267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.484 [2024-12-09 05:20:04.898401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.484 [2024-12-09 05:20:04.898419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.484 [2024-12-09 05:20:04.907931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.484 [2024-12-09 05:20:04.908070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.484 [2024-12-09 05:20:04.908088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.484 [2024-12-09 05:20:04.917671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.484 [2024-12-09 05:20:04.917823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.484 [2024-12-09 05:20:04.917840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.484 [2024-12-09 05:20:04.927420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.484 [2024-12-09 05:20:04.927558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.484 [2024-12-09 05:20:04.927576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.484 [2024-12-09 05:20:04.937063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.484 [2024-12-09 05:20:04.937215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.484 [2024-12-09 05:20:04.937232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.484 [2024-12-09 05:20:04.947058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.484 [2024-12-09 05:20:04.947211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.484 
[2024-12-09 05:20:04.947229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.484 [2024-12-09 05:20:04.956914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.484 [2024-12-09 05:20:04.957073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.484 [2024-12-09 05:20:04.957090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.484 [2024-12-09 05:20:04.966691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.484 [2024-12-09 05:20:04.966827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.484 [2024-12-09 05:20:04.966844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.484 [2024-12-09 05:20:04.976355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.484 [2024-12-09 05:20:04.976488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.484 [2024-12-09 05:20:04.976506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.484 [2024-12-09 05:20:04.986039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.484 [2024-12-09 05:20:04.986193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.485 [2024-12-09 05:20:04.986211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.485 [2024-12-09 05:20:04.995724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.485 [2024-12-09 05:20:04.995859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.485 [2024-12-09 05:20:04.995877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.485 [2024-12-09 05:20:05.005485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.485 [2024-12-09 05:20:05.005623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.485 [2024-12-09 05:20:05.005640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.485 [2024-12-09 05:20:05.015182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.485 [2024-12-09 05:20:05.015319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2722 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:28.485 [2024-12-09 05:20:05.015337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.485 [2024-12-09 05:20:05.024840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.485 [2024-12-09 05:20:05.024974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.485 [2024-12-09 05:20:05.024991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.485 [2024-12-09 05:20:05.034609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.485 [2024-12-09 05:20:05.034745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.485 [2024-12-09 05:20:05.034763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.485 [2024-12-09 05:20:05.044263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.485 [2024-12-09 05:20:05.044397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.485 [2024-12-09 05:20:05.044415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.485 [2024-12-09 05:20:05.054020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.485 [2024-12-09 05:20:05.054174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.485 [2024-12-09 05:20:05.054192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.485 [2024-12-09 05:20:05.063670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.485 [2024-12-09 05:20:05.063805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.485 [2024-12-09 05:20:05.063824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.485 [2024-12-09 05:20:05.073355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.485 [2024-12-09 05:20:05.073506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.485 [2024-12-09 05:20:05.073526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.485 [2024-12-09 05:20:05.083068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.485 [2024-12-09 05:20:05.083202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 
nsid:1 lba:8650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.485 [2024-12-09 05:20:05.083220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.485 [2024-12-09 05:20:05.092582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.485 [2024-12-09 05:20:05.092730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.485 [2024-12-09 05:20:05.092751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.485 [2024-12-09 05:20:05.102313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.485 [2024-12-09 05:20:05.102447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.485 [2024-12-09 05:20:05.102464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.485 [2024-12-09 05:20:05.112005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.485 [2024-12-09 05:20:05.112139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.485 [2024-12-09 05:20:05.112157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.485 [2024-12-09 05:20:05.121683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.485 [2024-12-09 05:20:05.121836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.485 [2024-12-09 05:20:05.121854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.743 [2024-12-09 05:20:05.131667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.743 [2024-12-09 05:20:05.131806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.743 [2024-12-09 05:20:05.131823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.743 [2024-12-09 05:20:05.141406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.743 [2024-12-09 05:20:05.141557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.743 [2024-12-09 05:20:05.141575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.743 [2024-12-09 05:20:05.151149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.743 [2024-12-09 05:20:05.151283] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.743 [2024-12-09 05:20:05.151300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.744 [2024-12-09 05:20:05.160803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.744 [2024-12-09 05:20:05.160955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-12-09 05:20:05.160973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.744 [2024-12-09 05:20:05.170674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.744 [2024-12-09 05:20:05.170808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-12-09 05:20:05.170828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.744 [2024-12-09 05:20:05.180461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.744 [2024-12-09 05:20:05.180613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-12-09 05:20:05.180632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.744 [2024-12-09 05:20:05.190192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.744 [2024-12-09 05:20:05.190352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-12-09 05:20:05.190370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.744 [2024-12-09 05:20:05.200098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.744 [2024-12-09 05:20:05.200251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-12-09 05:20:05.200269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.744 [2024-12-09 05:20:05.209937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.744 [2024-12-09 05:20:05.210080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-12-09 05:20:05.210099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.744 [2024-12-09 05:20:05.219771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.744 
[2024-12-09 05:20:05.219923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-12-09 05:20:05.219940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.744 [2024-12-09 05:20:05.229512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.744 [2024-12-09 05:20:05.229665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-12-09 05:20:05.229682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.744 [2024-12-09 05:20:05.239320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.744 [2024-12-09 05:20:05.239455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-12-09 05:20:05.239472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.744 [2024-12-09 05:20:05.248955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.744 [2024-12-09 05:20:05.249111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-12-09 05:20:05.249129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.744 [2024-12-09 05:20:05.258775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.744 [2024-12-09 05:20:05.258908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-12-09 05:20:05.258929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.744 [2024-12-09 05:20:05.268410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.744 [2024-12-09 05:20:05.268544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-12-09 05:20:05.268562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.744 [2024-12-09 05:20:05.278131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.744 [2024-12-09 05:20:05.278288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-12-09 05:20:05.278305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.744 [2024-12-09 05:20:05.287806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) 
with pdu=0x200016efeb58 00:25:28.744 [2024-12-09 05:20:05.287942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-12-09 05:20:05.287960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.744 [2024-12-09 05:20:05.297513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.744 [2024-12-09 05:20:05.297663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-12-09 05:20:05.297681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.744 [2024-12-09 05:20:05.307238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.744 [2024-12-09 05:20:05.307371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-12-09 05:20:05.307390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.744 [2024-12-09 05:20:05.316905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.744 [2024-12-09 05:20:05.317053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-12-09 05:20:05.317071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.744 [2024-12-09 05:20:05.326623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.744 [2024-12-09 05:20:05.326755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-12-09 05:20:05.326773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.744 [2024-12-09 05:20:05.336291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.744 [2024-12-09 05:20:05.336425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-12-09 05:20:05.336443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.744 [2024-12-09 05:20:05.346047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.744 [2024-12-09 05:20:05.346204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-12-09 05:20:05.346222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.744 [2024-12-09 05:20:05.355726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.744 [2024-12-09 05:20:05.355864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-12-09 05:20:05.355882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.744 [2024-12-09 05:20:05.365436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.744 [2024-12-09 05:20:05.365585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-12-09 05:20:05.365604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.744 [2024-12-09 05:20:05.375170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.744 [2024-12-09 05:20:05.375306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-12-09 05:20:05.375324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.744 [2024-12-09 05:20:05.384888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:28.744 [2024-12-09 05:20:05.385024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.744 [2024-12-09 05:20:05.385041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.003 [2024-12-09 05:20:05.394851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.003 [2024-12-09 05:20:05.395006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-12-09 05:20:05.395024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.003 [2024-12-09 05:20:05.404631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.003 [2024-12-09 05:20:05.404765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-12-09 05:20:05.404783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.003 [2024-12-09 05:20:05.414356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.003 [2024-12-09 05:20:05.414490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-12-09 05:20:05.414508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.003 [2024-12-09 05:20:05.424026] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.003 [2024-12-09 05:20:05.424158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-12-09 05:20:05.424176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.003 [2024-12-09 05:20:05.433698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.003 [2024-12-09 05:20:05.433850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-12-09 05:20:05.433868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.003 [2024-12-09 05:20:05.443410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.003 [2024-12-09 05:20:05.443541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-12-09 05:20:05.443575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.003 [2024-12-09 05:20:05.453372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.003 [2024-12-09 05:20:05.453510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-12-09 05:20:05.453527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.003 [2024-12-09 05:20:05.463214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.003 [2024-12-09 05:20:05.463367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-12-09 05:20:05.463385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.003 [2024-12-09 05:20:05.473058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.003 [2024-12-09 05:20:05.473211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-12-09 05:20:05.473229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.003 [2024-12-09 05:20:05.482761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.003 [2024-12-09 05:20:05.482895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-12-09 05:20:05.482913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 
00:25:29.003 [2024-12-09 05:20:05.492437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.003 [2024-12-09 05:20:05.492573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-12-09 05:20:05.492590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.003 [2024-12-09 05:20:05.502151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.003 [2024-12-09 05:20:05.502312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-12-09 05:20:05.502331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.003 [2024-12-09 05:20:05.512042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.003 [2024-12-09 05:20:05.512192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-12-09 05:20:05.512214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.003 [2024-12-09 05:20:05.521758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.003 [2024-12-09 05:20:05.521912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-12-09 05:20:05.521930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.003 [2024-12-09 05:20:05.531584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.003 [2024-12-09 05:20:05.531732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-12-09 05:20:05.531749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.003 [2024-12-09 05:20:05.541463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.003 [2024-12-09 05:20:05.541601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-12-09 05:20:05.541619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.003 [2024-12-09 05:20:05.551388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.003 [2024-12-09 05:20:05.551521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-12-09 05:20:05.551538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.003 [2024-12-09 05:20:05.561180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.003 [2024-12-09 05:20:05.561335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-12-09 05:20:05.561354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.003 [2024-12-09 05:20:05.570961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.003 [2024-12-09 05:20:05.571122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-12-09 05:20:05.571141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.003 [2024-12-09 05:20:05.580713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.003 [2024-12-09 05:20:05.580847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-12-09 05:20:05.580865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.003 [2024-12-09 05:20:05.590438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.003 [2024-12-09 05:20:05.590574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-12-09 05:20:05.590592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.003 [2024-12-09 05:20:05.600275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.003 [2024-12-09 05:20:05.600435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.003 [2024-12-09 05:20:05.600455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.004 [2024-12-09 05:20:05.609996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.004 [2024-12-09 05:20:05.610137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.004 [2024-12-09 05:20:05.610155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.004 [2024-12-09 05:20:05.619737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.004 [2024-12-09 05:20:05.619874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.004 [2024-12-09 05:20:05.619892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.004 [2024-12-09 05:20:05.629665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.004 [2024-12-09 05:20:05.629799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.004 [2024-12-09 05:20:05.629817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.004 [2024-12-09 05:20:05.639331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.004 [2024-12-09 05:20:05.639466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.004 [2024-12-09 05:20:05.639484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.263 [2024-12-09 05:20:05.649277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.263 [2024-12-09 05:20:05.649415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.263 [2024-12-09 05:20:05.649433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.263 [2024-12-09 05:20:05.659133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.263 [2024-12-09 05:20:05.659286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.263 [2024-12-09 05:20:05.659304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.263 [2024-12-09 05:20:05.668818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.263 [2024-12-09 05:20:05.668954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.263 [2024-12-09 05:20:05.668972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.263 26055.00 IOPS, 101.78 MiB/s [2024-12-09T04:20:05.909Z] [2024-12-09 05:20:05.678558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.263 [2024-12-09 05:20:05.678712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.263 [2024-12-09 05:20:05.678731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.263 [2024-12-09 05:20:05.688276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.263 [2024-12-09 05:20:05.688410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.263 
[2024-12-09 05:20:05.688428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.263 [2024-12-09 05:20:05.698105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.263 [2024-12-09 05:20:05.698239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.263 [2024-12-09 05:20:05.698274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.263 [2024-12-09 05:20:05.708106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.263 [2024-12-09 05:20:05.708247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.263 [2024-12-09 05:20:05.708265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.263 [2024-12-09 05:20:05.717912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.263 [2024-12-09 05:20:05.718070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.263 [2024-12-09 05:20:05.718089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.263 [2024-12-09 05:20:05.727733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.263 [2024-12-09 05:20:05.727868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.263 [2024-12-09 05:20:05.727885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.263 [2024-12-09 05:20:05.737412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.263 [2024-12-09 05:20:05.737545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.263 [2024-12-09 05:20:05.737563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.263 [2024-12-09 05:20:05.747062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.263 [2024-12-09 05:20:05.747214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.263 [2024-12-09 05:20:05.747232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.263 [2024-12-09 05:20:05.756843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.263 [2024-12-09 05:20:05.756995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9318 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:29.263 [2024-12-09 05:20:05.757021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.263 [2024-12-09 05:20:05.766694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.263 [2024-12-09 05:20:05.766831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.263 [2024-12-09 05:20:05.766851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.263 [2024-12-09 05:20:05.776385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.263 [2024-12-09 05:20:05.776518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.263 [2024-12-09 05:20:05.776536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.263 [2024-12-09 05:20:05.786127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.263 [2024-12-09 05:20:05.786278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.263 [2024-12-09 05:20:05.786297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.263 [2024-12-09 05:20:05.795826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.263 [2024-12-09 05:20:05.795981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.263 [2024-12-09 05:20:05.796006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.263 [2024-12-09 05:20:05.805603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.263 [2024-12-09 05:20:05.805737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.263 [2024-12-09 05:20:05.805755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.263 [2024-12-09 05:20:05.815329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.263 [2024-12-09 05:20:05.815481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.263 [2024-12-09 05:20:05.815499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.263 [2024-12-09 05:20:05.825045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.263 [2024-12-09 05:20:05.825197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 
nsid:1 lba:9790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.263 [2024-12-09 05:20:05.825215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.263 [2024-12-09 05:20:05.834752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.263 [2024-12-09 05:20:05.834886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.263 [2024-12-09 05:20:05.834905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.263 [2024-12-09 05:20:05.844416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.263 [2024-12-09 05:20:05.844547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.263 [2024-12-09 05:20:05.844564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.263 [2024-12-09 05:20:05.854171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.263 [2024-12-09 05:20:05.854310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.263 [2024-12-09 05:20:05.854327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.263 [2024-12-09 05:20:05.863928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.263 [2024-12-09 05:20:05.864086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.263 [2024-12-09 05:20:05.864106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.263 [2024-12-09 05:20:05.873643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.263 [2024-12-09 05:20:05.873776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.263 [2024-12-09 05:20:05.873795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.263 [2024-12-09 05:20:05.883482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.263 [2024-12-09 05:20:05.883618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.264 [2024-12-09 05:20:05.883637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.264 [2024-12-09 05:20:05.893218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.264 [2024-12-09 05:20:05.893352] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.264 [2024-12-09 05:20:05.893369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.264 [2024-12-09 05:20:05.902927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.264 [2024-12-09 05:20:05.903073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.264 [2024-12-09 05:20:05.903090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.523 [2024-12-09 05:20:05.912882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.523 [2024-12-09 05:20:05.913050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.523 [2024-12-09 05:20:05.913068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.523 [2024-12-09 05:20:05.922646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.523 [2024-12-09 05:20:05.922785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.523 [2024-12-09 05:20:05.922802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.523 [2024-12-09 05:20:05.932352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.523 [2024-12-09 05:20:05.932488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.523 [2024-12-09 05:20:05.932505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.523 [2024-12-09 05:20:05.942090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.523 [2024-12-09 05:20:05.942225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.523 [2024-12-09 05:20:05.942243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.523 [2024-12-09 05:20:05.951734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.523 [2024-12-09 05:20:05.951878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.523 [2024-12-09 05:20:05.951896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.523 [2024-12-09 05:20:05.961718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.523 
[2024-12-09 05:20:05.961854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.523 [2024-12-09 05:20:05.961871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.523 [2024-12-09 05:20:05.971581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.523 [2024-12-09 05:20:05.971732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.523 [2024-12-09 05:20:05.971750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.523 [2024-12-09 05:20:05.981399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.523 [2024-12-09 05:20:05.981550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.523 [2024-12-09 05:20:05.981568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.523 [2024-12-09 05:20:05.991121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.523 [2024-12-09 05:20:05.991257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.523 [2024-12-09 05:20:05.991275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.523 [2024-12-09 05:20:06.000830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.523 [2024-12-09 05:20:06.000964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.523 [2024-12-09 05:20:06.000982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.523 [2024-12-09 05:20:06.010572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.523 [2024-12-09 05:20:06.010707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.523 [2024-12-09 05:20:06.010726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.523 [2024-12-09 05:20:06.020252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.523 [2024-12-09 05:20:06.020385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.523 [2024-12-09 05:20:06.020407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.523 [2024-12-09 05:20:06.029945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with 
pdu=0x200016efeb58 00:25:29.523 [2024-12-09 05:20:06.030105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.523 [2024-12-09 05:20:06.030123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.523 [2024-12-09 05:20:06.039625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.523 [2024-12-09 05:20:06.039759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.523 [2024-12-09 05:20:06.039776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.523 [2024-12-09 05:20:06.049334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.523 [2024-12-09 05:20:06.049485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.523 [2024-12-09 05:20:06.049503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.523 [2024-12-09 05:20:06.059010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.523 [2024-12-09 05:20:06.059147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.523 [2024-12-09 05:20:06.059165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.523 [2024-12-09 05:20:06.068700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.524 [2024-12-09 05:20:06.068833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.524 [2024-12-09 05:20:06.068852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.524 [2024-12-09 05:20:06.078390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.524 [2024-12-09 05:20:06.078526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.524 [2024-12-09 05:20:06.078544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.524 [2024-12-09 05:20:06.088050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.524 [2024-12-09 05:20:06.088184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.524 [2024-12-09 05:20:06.088201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.524 [2024-12-09 05:20:06.097757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.524 [2024-12-09 05:20:06.097908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.524 [2024-12-09 05:20:06.097926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.524 [2024-12-09 05:20:06.107444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.524 [2024-12-09 05:20:06.107599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.524 [2024-12-09 05:20:06.107617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.524 [2024-12-09 05:20:06.117140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.524 [2024-12-09 05:20:06.117291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.524 [2024-12-09 05:20:06.117309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.524 [2024-12-09 05:20:06.126846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.524 [2024-12-09 05:20:06.126981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.524 [2024-12-09 05:20:06.127003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.524 [2024-12-09 05:20:06.136491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.524 [2024-12-09 05:20:06.136624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.524 [2024-12-09 05:20:06.136641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.524 [2024-12-09 05:20:06.146218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.524 [2024-12-09 05:20:06.146353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.524 [2024-12-09 05:20:06.146370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.524 [2024-12-09 05:20:06.155859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.524 [2024-12-09 05:20:06.155990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.524 [2024-12-09 05:20:06.156011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.524 [2024-12-09 05:20:06.165677] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.524 [2024-12-09 05:20:06.165814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.524 [2024-12-09 05:20:06.165832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.784 [2024-12-09 05:20:06.175582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.784 [2024-12-09 05:20:06.175716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.784 [2024-12-09 05:20:06.175734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.784 [2024-12-09 05:20:06.185253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.784 [2024-12-09 05:20:06.185405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.784 [2024-12-09 05:20:06.185423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.784 [2024-12-09 05:20:06.195034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.784 [2024-12-09 05:20:06.195186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.784 [2024-12-09 05:20:06.195204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.784 [2024-12-09 05:20:06.205077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.784 [2024-12-09 05:20:06.205215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.784 [2024-12-09 05:20:06.205233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.784 [2024-12-09 05:20:06.215011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.784 [2024-12-09 05:20:06.215147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.784 [2024-12-09 05:20:06.215166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.784 [2024-12-09 05:20:06.224808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.784 [2024-12-09 05:20:06.224940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.784 [2024-12-09 05:20:06.224957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 
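The repeated data digest errors in this stretch of the log appear to be the intended behavior of the nvmf_digest_error case: CRC32C corruption is injected through the accel error-injection RPC, so each WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) instead of failing outright. A minimal sketch of how the tally could be cross-checked against the controller's error counters, assuming the console output were saved to console.log (hypothetical path) and bdevperf were still listening on /var/tmp/bperf.sock, might look like:

    # count transient-transport-error completions captured in the saved log
    # (pattern taken verbatim from the completion lines above)
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log

    # query the same counter from bdevperf, mirroring the bdev_get_iostat + jq
    # filter that the digest.sh helper uses later in this run
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Both numbers should move together if the only error source is the injected digest corruption; this is only a sketch under those assumptions, not part of the test script itself.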
00:25:29.784 [2024-12-09 05:20:06.234526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.784 [2024-12-09 05:20:06.234660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.784 [2024-12-09 05:20:06.234678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.784 [2024-12-09 05:20:06.244170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.784 [2024-12-09 05:20:06.244303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.784 [2024-12-09 05:20:06.244320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.784 [2024-12-09 05:20:06.253902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.784 [2024-12-09 05:20:06.254062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.784 [2024-12-09 05:20:06.254080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.784 [2024-12-09 05:20:06.263637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.785 [2024-12-09 05:20:06.263772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.785 [2024-12-09 05:20:06.263790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.785 [2024-12-09 05:20:06.273332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.785 [2024-12-09 05:20:06.273485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.785 [2024-12-09 05:20:06.273506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.785 [2024-12-09 05:20:06.283060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.785 [2024-12-09 05:20:06.283211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.785 [2024-12-09 05:20:06.283230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.785 [2024-12-09 05:20:06.292756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.785 [2024-12-09 05:20:06.292892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.785 [2024-12-09 05:20:06.292909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.785 [2024-12-09 05:20:06.302409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.785 [2024-12-09 05:20:06.302542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.785 [2024-12-09 05:20:06.302559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.785 [2024-12-09 05:20:06.312171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.785 [2024-12-09 05:20:06.312335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.785 [2024-12-09 05:20:06.312353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.785 [2024-12-09 05:20:06.321850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.785 [2024-12-09 05:20:06.321985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.785 [2024-12-09 05:20:06.322007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.785 [2024-12-09 05:20:06.331542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.785 [2024-12-09 05:20:06.331694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.785 [2024-12-09 05:20:06.331712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.785 [2024-12-09 05:20:06.341235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.785 [2024-12-09 05:20:06.341367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.785 [2024-12-09 05:20:06.341384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.785 [2024-12-09 05:20:06.350876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.785 [2024-12-09 05:20:06.351013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.785 [2024-12-09 05:20:06.351031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.785 [2024-12-09 05:20:06.360610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.785 [2024-12-09 05:20:06.360742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.785 [2024-12-09 05:20:06.360760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.785 [2024-12-09 05:20:06.370235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.785 [2024-12-09 05:20:06.370368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.785 [2024-12-09 05:20:06.370386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.785 [2024-12-09 05:20:06.379936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.785 [2024-12-09 05:20:06.380097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.785 [2024-12-09 05:20:06.380116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.785 [2024-12-09 05:20:06.389684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.785 [2024-12-09 05:20:06.389816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.785 [2024-12-09 05:20:06.389833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.785 [2024-12-09 05:20:06.399412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.785 [2024-12-09 05:20:06.399562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.785 [2024-12-09 05:20:06.399580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.785 [2024-12-09 05:20:06.409171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.785 [2024-12-09 05:20:06.409306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.785 [2024-12-09 05:20:06.409324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:29.785 [2024-12-09 05:20:06.418820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:29.785 [2024-12-09 05:20:06.418953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.785 [2024-12-09 05:20:06.418970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:30.046 [2024-12-09 05:20:06.428752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:30.046 [2024-12-09 05:20:06.428886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.046 [2024-12-09 05:20:06.428904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:30.046 [2024-12-09 05:20:06.438595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:30.046 [2024-12-09 05:20:06.438728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.046 [2024-12-09 05:20:06.438749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:30.046 [2024-12-09 05:20:06.448295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:30.046 [2024-12-09 05:20:06.448429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.046 [2024-12-09 05:20:06.448447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:30.046 [2024-12-09 05:20:06.458003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:30.046 [2024-12-09 05:20:06.458139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.046 [2024-12-09 05:20:06.458172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:30.046 [2024-12-09 05:20:06.467946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:30.046 [2024-12-09 05:20:06.468090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.046 [2024-12-09 05:20:06.468109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:30.046 [2024-12-09 05:20:06.477699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:30.046 [2024-12-09 05:20:06.477833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.046 [2024-12-09 05:20:06.477851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:30.046 [2024-12-09 05:20:06.487420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:30.046 [2024-12-09 05:20:06.487574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.046 [2024-12-09 05:20:06.487591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:30.046 [2024-12-09 05:20:06.497078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:30.046 [2024-12-09 05:20:06.497213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.046 
[2024-12-09 05:20:06.497231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:30.046 [2024-12-09 05:20:06.506745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:30.046 [2024-12-09 05:20:06.506899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.046 [2024-12-09 05:20:06.506917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:30.046 [2024-12-09 05:20:06.516653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:30.046 [2024-12-09 05:20:06.516790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.046 [2024-12-09 05:20:06.516808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:30.046 [2024-12-09 05:20:06.526317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:30.046 [2024-12-09 05:20:06.526456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.046 [2024-12-09 05:20:06.526474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:30.046 [2024-12-09 05:20:06.536017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:30.046 [2024-12-09 05:20:06.536171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.046 [2024-12-09 05:20:06.536189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:30.046 [2024-12-09 05:20:06.545712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:30.046 [2024-12-09 05:20:06.545846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.046 [2024-12-09 05:20:06.545863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:30.046 [2024-12-09 05:20:06.555430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:30.046 [2024-12-09 05:20:06.555583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.046 [2024-12-09 05:20:06.555601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:30.046 [2024-12-09 05:20:06.565122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:30.046 [2024-12-09 05:20:06.565260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10248 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:30.046 [2024-12-09 05:20:06.565278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:30.046 [2024-12-09 05:20:06.574755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:30.046 [2024-12-09 05:20:06.574888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.046 [2024-12-09 05:20:06.574906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:30.046 [2024-12-09 05:20:06.584477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:30.046 [2024-12-09 05:20:06.584615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.046 [2024-12-09 05:20:06.584633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:30.046 [2024-12-09 05:20:06.594187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:30.046 [2024-12-09 05:20:06.594342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.046 [2024-12-09 05:20:06.594360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:30.046 [2024-12-09 05:20:06.603927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:30.047 [2024-12-09 05:20:06.604071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.047 [2024-12-09 05:20:06.604090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:30.047 [2024-12-09 05:20:06.613765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:30.047 [2024-12-09 05:20:06.613917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.047 [2024-12-09 05:20:06.613935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:30.047 [2024-12-09 05:20:06.623732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:30.047 [2024-12-09 05:20:06.623869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.047 [2024-12-09 05:20:06.623886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:30.047 [2024-12-09 05:20:06.633548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:30.047 [2024-12-09 05:20:06.633683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 
nsid:1 lba:17287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.047 [2024-12-09 05:20:06.633701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:30.047 [2024-12-09 05:20:06.643355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:30.047 [2024-12-09 05:20:06.643490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.047 [2024-12-09 05:20:06.643508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:30.047 [2024-12-09 05:20:06.653056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:30.047 [2024-12-09 05:20:06.653194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.047 [2024-12-09 05:20:06.653211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:30.047 [2024-12-09 05:20:06.662751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:30.047 [2024-12-09 05:20:06.662887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.047 [2024-12-09 05:20:06.662905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:30.047 [2024-12-09 05:20:06.672480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d180) with pdu=0x200016efeb58 00:25:30.047 [2024-12-09 05:20:06.672632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.047 [2024-12-09 05:20:06.672650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:30.047 26144.00 IOPS, 102.12 MiB/s 00:25:30.047 Latency(us) 00:25:30.047 [2024-12-09T04:20:06.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.047 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:30.047 nvme0n1 : 2.01 26148.95 102.14 0.00 0.00 4886.47 3504.75 10770.70 00:25:30.047 [2024-12-09T04:20:06.693Z] =================================================================================================================== 00:25:30.047 [2024-12-09T04:20:06.693Z] Total : 26148.95 102.14 0.00 0.00 4886.47 3504.75 10770.70 00:25:30.047 { 00:25:30.047 "results": [ 00:25:30.047 { 00:25:30.047 "job": "nvme0n1", 00:25:30.047 "core_mask": "0x2", 00:25:30.047 "workload": "randwrite", 00:25:30.047 "status": "finished", 00:25:30.047 "queue_depth": 128, 00:25:30.047 "io_size": 4096, 00:25:30.047 "runtime": 2.006046, 00:25:30.047 "iops": 26148.9517189536, 00:25:30.047 "mibps": 102.1443426521625, 00:25:30.047 "io_failed": 0, 00:25:30.047 "io_timeout": 0, 00:25:30.047 "avg_latency_us": 4886.47441367009, 00:25:30.047 "min_latency_us": 3504.751304347826, 00:25:30.047 "max_latency_us": 10770.699130434783 00:25:30.047 } 00:25:30.047 ], 00:25:30.047 "core_count": 1 00:25:30.047 } 
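The Latency(us) summary and the JSON block above close out this randwrite pass: roughly 26k IOPS at ~102 MiB/s with io_failed of 0, suggesting the injected digest errors surfaced only as transient transport errors and retries rather than failed I/O. If that JSON block were redirected to a file (results.json is a hypothetical name; bdevperf prints it to the console here), the headline metrics could be pulled out with a short jq sketch using the field names as printed:

    # extract the headline metrics from a saved bdevperf result block
    jq -r '.results[0] | "iops=\(.iops) mibps=\(.mibps) io_failed=\(.io_failed) avg_latency_us=\(.avg_latency_us)"' results.json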
00:25:30.306 05:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:30.306 05:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:30.306 05:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:30.306 | .driver_specific 00:25:30.306 | .nvme_error 00:25:30.306 | .status_code 00:25:30.306 | .command_transient_transport_error' 00:25:30.306 05:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:30.306 05:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 205 > 0 )) 00:25:30.306 05:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3727100 00:25:30.306 05:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3727100 ']' 00:25:30.306 05:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3727100 00:25:30.306 05:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:30.306 05:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:30.306 05:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3727100 00:25:30.306 05:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:30.306 05:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:30.306 05:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3727100' 00:25:30.306 killing process with pid 3727100 00:25:30.306 05:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3727100 00:25:30.306 Received shutdown signal, test time was about 2.000000 seconds 00:25:30.306 00:25:30.306 Latency(us) 00:25:30.306 [2024-12-09T04:20:06.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.307 [2024-12-09T04:20:06.953Z] =================================================================================================================== 00:25:30.307 [2024-12-09T04:20:06.953Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:30.307 05:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3727100 00:25:30.566 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:30.566 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:30.566 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:30.566 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:30.566 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:30.566 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3727634 00:25:30.566 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 
3727634 /var/tmp/bperf.sock 00:25:30.566 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:30.566 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3727634 ']' 00:25:30.566 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:30.566 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:30.566 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:30.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:30.566 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:30.566 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:30.566 [2024-12-09 05:20:07.186107] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:25:30.566 [2024-12-09 05:20:07.186158] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3727634 ] 00:25:30.566 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:30.566 Zero copy mechanism will not be used. 00:25:30.826 [2024-12-09 05:20:07.251047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.826 [2024-12-09 05:20:07.290453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.826 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:30.826 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:30.826 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:30.826 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:31.085 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:31.085 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.085 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:31.085 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.085 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:31.085 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:31.344 nvme0n1 00:25:31.604 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:31.604 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.604 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:31.604 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.604 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:31.604 05:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:31.604 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:31.604 Zero copy mechanism will not be used. 00:25:31.604 Running I/O for 2 seconds... 00:25:31.604 [2024-12-09 05:20:08.087906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.604 [2024-12-09 05:20:08.088046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.604 [2024-12-09 05:20:08.088080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.604 [2024-12-09 05:20:08.094054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.604 [2024-12-09 05:20:08.094155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.604 [2024-12-09 05:20:08.094180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.604 [2024-12-09 05:20:08.099928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.604 [2024-12-09 05:20:08.100014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.605 [2024-12-09 05:20:08.100036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.605 [2024-12-09 05:20:08.106458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.605 [2024-12-09 05:20:08.106531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.605 [2024-12-09 05:20:08.106551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.605 [2024-12-09 05:20:08.112920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.605 [2024-12-09 05:20:08.112989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.605 
[2024-12-09 05:20:08.113016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.605 [2024-12-09 05:20:08.119222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.605 [2024-12-09 05:20:08.119295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.605 [2024-12-09 05:20:08.119314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.605 [2024-12-09 05:20:08.125640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.605 [2024-12-09 05:20:08.125739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.605 [2024-12-09 05:20:08.125758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.605 [2024-12-09 05:20:08.131835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.605 [2024-12-09 05:20:08.131905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.605 [2024-12-09 05:20:08.131923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.605 [2024-12-09 05:20:08.138158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.605 [2024-12-09 05:20:08.138230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.605 [2024-12-09 05:20:08.138259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.605 [2024-12-09 05:20:08.144622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.605 [2024-12-09 05:20:08.144704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.605 [2024-12-09 05:20:08.144723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.605 [2024-12-09 05:20:08.151082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.605 [2024-12-09 05:20:08.151189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.605 [2024-12-09 05:20:08.151207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.605 [2024-12-09 05:20:08.156769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.605 [2024-12-09 05:20:08.156837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:31.605 [2024-12-09 05:20:08.156855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.605 [2024-12-09 05:20:08.163872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.605 [2024-12-09 05:20:08.164004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.605 [2024-12-09 05:20:08.164024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.605 [2024-12-09 05:20:08.171815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.605 [2024-12-09 05:20:08.172103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.605 [2024-12-09 05:20:08.172123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.605 [2024-12-09 05:20:08.179653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.605 [2024-12-09 05:20:08.179978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.605 [2024-12-09 05:20:08.180003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.605 [2024-12-09 05:20:08.187761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.605 [2024-12-09 05:20:08.188139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.605 [2024-12-09 05:20:08.188158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.605 [2024-12-09 05:20:08.195508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.605 [2024-12-09 05:20:08.195816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.605 [2024-12-09 05:20:08.195835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.605 [2024-12-09 05:20:08.203255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.605 [2024-12-09 05:20:08.203554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.605 [2024-12-09 05:20:08.203574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.605 [2024-12-09 05:20:08.211537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.605 [2024-12-09 05:20:08.211709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.605 [2024-12-09 05:20:08.211728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.605 [2024-12-09 05:20:08.219306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.605 [2024-12-09 05:20:08.219652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.605 [2024-12-09 05:20:08.219671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.605 [2024-12-09 05:20:08.226470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.605 [2024-12-09 05:20:08.226744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.605 [2024-12-09 05:20:08.226763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.605 [2024-12-09 05:20:08.232081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.605 [2024-12-09 05:20:08.232352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.605 [2024-12-09 05:20:08.232371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.605 [2024-12-09 05:20:08.236952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.605 [2024-12-09 05:20:08.237253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.605 [2024-12-09 05:20:08.237272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.605 [2024-12-09 05:20:08.241561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.605 [2024-12-09 05:20:08.241855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.605 [2024-12-09 05:20:08.241874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.605 [2024-12-09 05:20:08.246504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.605 [2024-12-09 05:20:08.246782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.605 [2024-12-09 05:20:08.246801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.866 [2024-12-09 05:20:08.251230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.866 [2024-12-09 05:20:08.251515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.866 [2024-12-09 05:20:08.251534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.866 [2024-12-09 05:20:08.255726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.866 [2024-12-09 05:20:08.256011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.866 [2024-12-09 05:20:08.256052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.866 [2024-12-09 05:20:08.260206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.866 [2024-12-09 05:20:08.260493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.866 [2024-12-09 05:20:08.260513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.866 [2024-12-09 05:20:08.264649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.866 [2024-12-09 05:20:08.264925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.866 [2024-12-09 05:20:08.264945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.866 [2024-12-09 05:20:08.269107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.866 [2024-12-09 05:20:08.269406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.866 [2024-12-09 05:20:08.269425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.866 [2024-12-09 05:20:08.273530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.866 [2024-12-09 05:20:08.273819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.866 [2024-12-09 05:20:08.273839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.866 [2024-12-09 05:20:08.277948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.866 [2024-12-09 05:20:08.278244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.866 [2024-12-09 05:20:08.278264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.866 [2024-12-09 05:20:08.282334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.866 [2024-12-09 05:20:08.282627] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.866 [2024-12-09 05:20:08.282646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.866 [2024-12-09 05:20:08.286716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.866 [2024-12-09 05:20:08.287019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.866 [2024-12-09 05:20:08.287039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.866 [2024-12-09 05:20:08.291633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.866 [2024-12-09 05:20:08.291913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.866 [2024-12-09 05:20:08.291932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.866 [2024-12-09 05:20:08.296479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.866 [2024-12-09 05:20:08.296788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.866 [2024-12-09 05:20:08.296807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.866 [2024-12-09 05:20:08.302051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.866 [2024-12-09 05:20:08.302437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.866 [2024-12-09 05:20:08.302457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.866 [2024-12-09 05:20:08.308545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.866 [2024-12-09 05:20:08.308873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.866 [2024-12-09 05:20:08.308892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.866 [2024-12-09 05:20:08.315865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.866 [2024-12-09 05:20:08.316256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.866 [2024-12-09 05:20:08.316276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.866 [2024-12-09 05:20:08.323780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.866 [2024-12-09 
05:20:08.324160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.866 [2024-12-09 05:20:08.324179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.866 [2024-12-09 05:20:08.332570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.867 [2024-12-09 05:20:08.332915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.867 [2024-12-09 05:20:08.332933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.867 [2024-12-09 05:20:08.341048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.867 [2024-12-09 05:20:08.341441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.867 [2024-12-09 05:20:08.341460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.867 [2024-12-09 05:20:08.349232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.867 [2024-12-09 05:20:08.349596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.867 [2024-12-09 05:20:08.349616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.867 [2024-12-09 05:20:08.357358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.867 [2024-12-09 05:20:08.357694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.867 [2024-12-09 05:20:08.357714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.867 [2024-12-09 05:20:08.365842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.867 [2024-12-09 05:20:08.366211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.867 [2024-12-09 05:20:08.366231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.867 [2024-12-09 05:20:08.374530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.867 [2024-12-09 05:20:08.374922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.867 [2024-12-09 05:20:08.374942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.867 [2024-12-09 05:20:08.383101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with 
pdu=0x200016eff3c8 00:25:31.867 [2024-12-09 05:20:08.383502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.867 [2024-12-09 05:20:08.383522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.867 [2024-12-09 05:20:08.391026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.867 [2024-12-09 05:20:08.391387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.867 [2024-12-09 05:20:08.391407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.867 [2024-12-09 05:20:08.398660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.867 [2024-12-09 05:20:08.398964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.867 [2024-12-09 05:20:08.398983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.867 [2024-12-09 05:20:08.406637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.867 [2024-12-09 05:20:08.406990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.867 [2024-12-09 05:20:08.407015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.867 [2024-12-09 05:20:08.414742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.867 [2024-12-09 05:20:08.415123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.867 [2024-12-09 05:20:08.415143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.867 [2024-12-09 05:20:08.422862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.867 [2024-12-09 05:20:08.423209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.867 [2024-12-09 05:20:08.423229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.867 [2024-12-09 05:20:08.430586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.867 [2024-12-09 05:20:08.430903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.867 [2024-12-09 05:20:08.430933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.867 [2024-12-09 05:20:08.438466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.867 [2024-12-09 05:20:08.438774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.867 [2024-12-09 05:20:08.438793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.867 [2024-12-09 05:20:08.445871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.867 [2024-12-09 05:20:08.446203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.867 [2024-12-09 05:20:08.446222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.867 [2024-12-09 05:20:08.453321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.867 [2024-12-09 05:20:08.453698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.867 [2024-12-09 05:20:08.453718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.867 [2024-12-09 05:20:08.461029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.867 [2024-12-09 05:20:08.461442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.867 [2024-12-09 05:20:08.461462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.867 [2024-12-09 05:20:08.468491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.867 [2024-12-09 05:20:08.468849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.867 [2024-12-09 05:20:08.468869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.867 [2024-12-09 05:20:08.476030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.867 [2024-12-09 05:20:08.476355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.867 [2024-12-09 05:20:08.476375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.867 [2024-12-09 05:20:08.483127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.867 [2024-12-09 05:20:08.483523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.867 [2024-12-09 05:20:08.483542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:31.867 [2024-12-09 05:20:08.489258] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.867 [2024-12-09 05:20:08.489534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.867 [2024-12-09 05:20:08.489553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.867 [2024-12-09 05:20:08.494572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.867 [2024-12-09 05:20:08.494860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.867 [2024-12-09 05:20:08.494880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.867 [2024-12-09 05:20:08.500278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.867 [2024-12-09 05:20:08.500553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.867 [2024-12-09 05:20:08.500573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.867 [2024-12-09 05:20:08.505323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:31.867 [2024-12-09 05:20:08.505604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.867 [2024-12-09 05:20:08.505624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.128 [2024-12-09 05:20:08.510153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.128 [2024-12-09 05:20:08.510444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.128 [2024-12-09 05:20:08.510465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.128 [2024-12-09 05:20:08.514853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.128 [2024-12-09 05:20:08.515147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.128 [2024-12-09 05:20:08.515166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.128 [2024-12-09 05:20:08.519682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.128 [2024-12-09 05:20:08.519932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.128 [2024-12-09 05:20:08.519951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.128 [2024-12-09 05:20:08.524467] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.128 [2024-12-09 05:20:08.524732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.128 [2024-12-09 05:20:08.524751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.128 [2024-12-09 05:20:08.528938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.128 [2024-12-09 05:20:08.529211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.128 [2024-12-09 05:20:08.529230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.128 [2024-12-09 05:20:08.533368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.128 [2024-12-09 05:20:08.533630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.128 [2024-12-09 05:20:08.533649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.128 [2024-12-09 05:20:08.537780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.128 [2024-12-09 05:20:08.538062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.128 [2024-12-09 05:20:08.538081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.128 [2024-12-09 05:20:08.542194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.128 [2024-12-09 05:20:08.542455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.128 [2024-12-09 05:20:08.542475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.128 [2024-12-09 05:20:08.546567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.128 [2024-12-09 05:20:08.546840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.128 [2024-12-09 05:20:08.546859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.128 [2024-12-09 05:20:08.550965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.128 [2024-12-09 05:20:08.551240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.128 [2024-12-09 05:20:08.551260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.128 
[2024-12-09 05:20:08.555374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.128 [2024-12-09 05:20:08.555643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.128 [2024-12-09 05:20:08.555661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.128 [2024-12-09 05:20:08.559823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.128 [2024-12-09 05:20:08.560118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.128 [2024-12-09 05:20:08.560138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.128 [2024-12-09 05:20:08.564270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.128 [2024-12-09 05:20:08.564554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.128 [2024-12-09 05:20:08.564574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.128 [2024-12-09 05:20:08.568724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.128 [2024-12-09 05:20:08.569015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.128 [2024-12-09 05:20:08.569036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.128 [2024-12-09 05:20:08.573151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.128 [2024-12-09 05:20:08.573436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.128 [2024-12-09 05:20:08.573459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.128 [2024-12-09 05:20:08.577574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.128 [2024-12-09 05:20:08.577841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.128 [2024-12-09 05:20:08.577861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.128 [2024-12-09 05:20:08.581962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.128 [2024-12-09 05:20:08.582235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.128 [2024-12-09 05:20:08.582255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:25:32.128 [2024-12-09 05:20:08.586283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.128 [2024-12-09 05:20:08.586539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.128 [2024-12-09 05:20:08.586558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.128 [2024-12-09 05:20:08.590597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.128 [2024-12-09 05:20:08.590855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.129 [2024-12-09 05:20:08.590873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.129 [2024-12-09 05:20:08.594921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.129 [2024-12-09 05:20:08.595229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.129 [2024-12-09 05:20:08.595249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.129 [2024-12-09 05:20:08.599323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.129 [2024-12-09 05:20:08.599593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.129 [2024-12-09 05:20:08.599613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.129 [2024-12-09 05:20:08.603704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.129 [2024-12-09 05:20:08.603979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.129 [2024-12-09 05:20:08.604005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.129 [2024-12-09 05:20:08.608132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.129 [2024-12-09 05:20:08.608398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.129 [2024-12-09 05:20:08.608417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.129 [2024-12-09 05:20:08.612550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.129 [2024-12-09 05:20:08.612818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.129 [2024-12-09 05:20:08.612838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.129 [2024-12-09 05:20:08.616976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.129 [2024-12-09 05:20:08.617270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.129 [2024-12-09 05:20:08.617290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.129 [2024-12-09 05:20:08.621480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.129 [2024-12-09 05:20:08.621754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.129 [2024-12-09 05:20:08.621774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.129 [2024-12-09 05:20:08.625923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.129 [2024-12-09 05:20:08.626201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.129 [2024-12-09 05:20:08.626221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.129 [2024-12-09 05:20:08.630334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.129 [2024-12-09 05:20:08.630603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.129 [2024-12-09 05:20:08.630622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.129 [2024-12-09 05:20:08.634708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.129 [2024-12-09 05:20:08.634974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.129 [2024-12-09 05:20:08.634993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.129 [2024-12-09 05:20:08.639112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.129 [2024-12-09 05:20:08.639374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.129 [2024-12-09 05:20:08.639393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.129 [2024-12-09 05:20:08.643499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.129 [2024-12-09 05:20:08.643774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.129 [2024-12-09 05:20:08.643793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.129 [2024-12-09 05:20:08.648262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.129 [2024-12-09 05:20:08.648506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.129 [2024-12-09 05:20:08.648524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.129 [2024-12-09 05:20:08.653711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.129 [2024-12-09 05:20:08.653989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.129 [2024-12-09 05:20:08.654017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.129 [2024-12-09 05:20:08.659266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.129 [2024-12-09 05:20:08.659531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.129 [2024-12-09 05:20:08.659551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.129 [2024-12-09 05:20:08.665420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.129 [2024-12-09 05:20:08.665696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.129 [2024-12-09 05:20:08.665715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.129 [2024-12-09 05:20:08.671070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.129 [2024-12-09 05:20:08.671345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.129 [2024-12-09 05:20:08.671365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.129 [2024-12-09 05:20:08.676881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.129 [2024-12-09 05:20:08.677163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.129 [2024-12-09 05:20:08.677183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.129 [2024-12-09 05:20:08.683079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.129 [2024-12-09 05:20:08.683306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.129 [2024-12-09 05:20:08.683326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.129 [2024-12-09 05:20:08.689252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.129 [2024-12-09 05:20:08.689532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.129 [2024-12-09 05:20:08.689551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.129 [2024-12-09 05:20:08.694916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.129 [2024-12-09 05:20:08.695195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.129 [2024-12-09 05:20:08.695215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.129 [2024-12-09 05:20:08.701055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.129 [2024-12-09 05:20:08.701329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.129 [2024-12-09 05:20:08.701352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.129 [2024-12-09 05:20:08.706770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.129 [2024-12-09 05:20:08.707058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.129 [2024-12-09 05:20:08.707077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.129 [2024-12-09 05:20:08.712873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.129 [2024-12-09 05:20:08.713158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.129 [2024-12-09 05:20:08.713178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.129 [2024-12-09 05:20:08.718749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.129 [2024-12-09 05:20:08.719041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.129 [2024-12-09 05:20:08.719061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.129 [2024-12-09 05:20:08.724735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.129 [2024-12-09 05:20:08.725009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.129 [2024-12-09 
05:20:08.725029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.129 [2024-12-09 05:20:08.730370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.129 [2024-12-09 05:20:08.730644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.130 [2024-12-09 05:20:08.730663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.130 [2024-12-09 05:20:08.736094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.130 [2024-12-09 05:20:08.736354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.130 [2024-12-09 05:20:08.736374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.130 [2024-12-09 05:20:08.742039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.130 [2024-12-09 05:20:08.742315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.130 [2024-12-09 05:20:08.742334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.130 [2024-12-09 05:20:08.747874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.130 [2024-12-09 05:20:08.748115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.130 [2024-12-09 05:20:08.748134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.130 [2024-12-09 05:20:08.753722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.130 [2024-12-09 05:20:08.754007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.130 [2024-12-09 05:20:08.754027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.130 [2024-12-09 05:20:08.759241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.130 [2024-12-09 05:20:08.759506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.130 [2024-12-09 05:20:08.759525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.130 [2024-12-09 05:20:08.764664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.130 [2024-12-09 05:20:08.764922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:32.130 [2024-12-09 05:20:08.764942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.130 [2024-12-09 05:20:08.770247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.130 [2024-12-09 05:20:08.770497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.130 [2024-12-09 05:20:08.770517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.390 [2024-12-09 05:20:08.776250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.390 [2024-12-09 05:20:08.776518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.390 [2024-12-09 05:20:08.776538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.390 [2024-12-09 05:20:08.781940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.390 [2024-12-09 05:20:08.782216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.390 [2024-12-09 05:20:08.782235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.390 [2024-12-09 05:20:08.788580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.390 [2024-12-09 05:20:08.788896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.390 [2024-12-09 05:20:08.788916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.390 [2024-12-09 05:20:08.794518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.390 [2024-12-09 05:20:08.794797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.390 [2024-12-09 05:20:08.794817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.390 [2024-12-09 05:20:08.800382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.390 [2024-12-09 05:20:08.800626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.390 [2024-12-09 05:20:08.800645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.390 [2024-12-09 05:20:08.805827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.390 [2024-12-09 05:20:08.806102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.390 [2024-12-09 05:20:08.806121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.390 [2024-12-09 05:20:08.811993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.390 [2024-12-09 05:20:08.812284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.390 [2024-12-09 05:20:08.812304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.390 [2024-12-09 05:20:08.817060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.390 [2024-12-09 05:20:08.817342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.390 [2024-12-09 05:20:08.817361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.390 [2024-12-09 05:20:08.822466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.390 [2024-12-09 05:20:08.822717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.390 [2024-12-09 05:20:08.822736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.390 [2024-12-09 05:20:08.828045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.391 [2024-12-09 05:20:08.828318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.391 [2024-12-09 05:20:08.828338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.391 [2024-12-09 05:20:08.833275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.391 [2024-12-09 05:20:08.833539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.391 [2024-12-09 05:20:08.833557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.391 [2024-12-09 05:20:08.838225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.391 [2024-12-09 05:20:08.838481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.391 [2024-12-09 05:20:08.838500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.391 [2024-12-09 05:20:08.842843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.391 [2024-12-09 05:20:08.843111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.391 [2024-12-09 05:20:08.843130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.391 [2024-12-09 05:20:08.847801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.391 [2024-12-09 05:20:08.848067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.391 [2024-12-09 05:20:08.848090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.391 [2024-12-09 05:20:08.853226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.391 [2024-12-09 05:20:08.853491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.391 [2024-12-09 05:20:08.853511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.391 [2024-12-09 05:20:08.858279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.391 [2024-12-09 05:20:08.858544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.391 [2024-12-09 05:20:08.858563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.391 [2024-12-09 05:20:08.863366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.391 [2024-12-09 05:20:08.863633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.391 [2024-12-09 05:20:08.863654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.391 [2024-12-09 05:20:08.868248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.391 [2024-12-09 05:20:08.868514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.391 [2024-12-09 05:20:08.868534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.391 [2024-12-09 05:20:08.873098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.391 [2024-12-09 05:20:08.873362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.391 [2024-12-09 05:20:08.873382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.391 [2024-12-09 05:20:08.878180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.391 [2024-12-09 05:20:08.878448] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.391 [2024-12-09 05:20:08.878468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.391 [2024-12-09 05:20:08.882976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.391 [2024-12-09 05:20:08.883250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.391 [2024-12-09 05:20:08.883269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.391 [2024-12-09 05:20:08.887987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.391 [2024-12-09 05:20:08.888255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.391 [2024-12-09 05:20:08.888274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.391 [2024-12-09 05:20:08.893234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.391 [2024-12-09 05:20:08.893494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.391 [2024-12-09 05:20:08.893514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.391 [2024-12-09 05:20:08.898237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.391 [2024-12-09 05:20:08.898500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.391 [2024-12-09 05:20:08.898519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.391 [2024-12-09 05:20:08.903046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.391 [2024-12-09 05:20:08.903339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.391 [2024-12-09 05:20:08.903359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.391 [2024-12-09 05:20:08.907788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.391 [2024-12-09 05:20:08.908059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.391 [2024-12-09 05:20:08.908077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.391 [2024-12-09 05:20:08.913037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.391 [2024-12-09 05:20:08.913298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.391 [2024-12-09 05:20:08.913318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.391 [2024-12-09 05:20:08.918161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.391 [2024-12-09 05:20:08.918425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.391 [2024-12-09 05:20:08.918444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.391 [2024-12-09 05:20:08.923278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.391 [2024-12-09 05:20:08.923627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.391 [2024-12-09 05:20:08.923662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.391 [2024-12-09 05:20:08.928538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.391 [2024-12-09 05:20:08.928772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.391 [2024-12-09 05:20:08.928794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.391 [2024-12-09 05:20:08.933212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.391 [2024-12-09 05:20:08.933445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.391 [2024-12-09 05:20:08.933465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.391 [2024-12-09 05:20:08.937555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.391 [2024-12-09 05:20:08.937786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.391 [2024-12-09 05:20:08.937805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.391 [2024-12-09 05:20:08.942056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.391 [2024-12-09 05:20:08.942286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.391 [2024-12-09 05:20:08.942305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.391 [2024-12-09 05:20:08.946371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.391 [2024-12-09 
05:20:08.946605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.391 [2024-12-09 05:20:08.946624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.391 [2024-12-09 05:20:08.950914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.391 [2024-12-09 05:20:08.951154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.391 [2024-12-09 05:20:08.951173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.391 [2024-12-09 05:20:08.955403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.391 [2024-12-09 05:20:08.955632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.391 [2024-12-09 05:20:08.955651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.391 [2024-12-09 05:20:08.959639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.392 [2024-12-09 05:20:08.959883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.392 [2024-12-09 05:20:08.959902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.392 [2024-12-09 05:20:08.963873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.392 [2024-12-09 05:20:08.964132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.392 [2024-12-09 05:20:08.964152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.392 [2024-12-09 05:20:08.968443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.392 [2024-12-09 05:20:08.968660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.392 [2024-12-09 05:20:08.968679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.392 [2024-12-09 05:20:08.973873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.392 [2024-12-09 05:20:08.974111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.392 [2024-12-09 05:20:08.974134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.392 [2024-12-09 05:20:08.979139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with 
pdu=0x200016eff3c8 00:25:32.392 [2024-12-09 05:20:08.979366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.392 [2024-12-09 05:20:08.979385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.392 [2024-12-09 05:20:08.984536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.392 [2024-12-09 05:20:08.984823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.392 [2024-12-09 05:20:08.984843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.392 [2024-12-09 05:20:08.990511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.392 [2024-12-09 05:20:08.990745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.392 [2024-12-09 05:20:08.990764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.392 [2024-12-09 05:20:08.995358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.392 [2024-12-09 05:20:08.995590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.392 [2024-12-09 05:20:08.995609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.392 [2024-12-09 05:20:08.999865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.392 [2024-12-09 05:20:09.000094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.392 [2024-12-09 05:20:09.000113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.392 [2024-12-09 05:20:09.004130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.392 [2024-12-09 05:20:09.004360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.392 [2024-12-09 05:20:09.004380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.392 [2024-12-09 05:20:09.008388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.392 [2024-12-09 05:20:09.008613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.392 [2024-12-09 05:20:09.008632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.392 [2024-12-09 05:20:09.012628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.392 [2024-12-09 05:20:09.012867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.392 [2024-12-09 05:20:09.012887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.392 [2024-12-09 05:20:09.016839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.392 [2024-12-09 05:20:09.017102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.392 [2024-12-09 05:20:09.017120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.392 [2024-12-09 05:20:09.021200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.392 [2024-12-09 05:20:09.021449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.392 [2024-12-09 05:20:09.021468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.392 [2024-12-09 05:20:09.025563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.392 [2024-12-09 05:20:09.025793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.392 [2024-12-09 05:20:09.025812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.392 [2024-12-09 05:20:09.029844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.392 [2024-12-09 05:20:09.030095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.392 [2024-12-09 05:20:09.030114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.652 [2024-12-09 05:20:09.034103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.652 [2024-12-09 05:20:09.034325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.652 [2024-12-09 05:20:09.034344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.652 [2024-12-09 05:20:09.038417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.652 [2024-12-09 05:20:09.038652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.652 [2024-12-09 05:20:09.038672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.652 [2024-12-09 05:20:09.042774] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.652 [2024-12-09 05:20:09.043005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.652 [2024-12-09 05:20:09.043024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.652 [2024-12-09 05:20:09.046936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.652 [2024-12-09 05:20:09.047190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.652 [2024-12-09 05:20:09.047209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.652 [2024-12-09 05:20:09.051144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.652 [2024-12-09 05:20:09.051387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.652 [2024-12-09 05:20:09.051406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.652 [2024-12-09 05:20:09.056267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.652 [2024-12-09 05:20:09.056569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.652 [2024-12-09 05:20:09.056588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.652 [2024-12-09 05:20:09.062744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.652 [2024-12-09 05:20:09.063047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.652 [2024-12-09 05:20:09.063066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.652 [2024-12-09 05:20:09.068339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.652 [2024-12-09 05:20:09.068598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.652 [2024-12-09 05:20:09.068617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.652 [2024-12-09 05:20:09.073658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.652 [2024-12-09 05:20:09.073894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.652 [2024-12-09 05:20:09.073914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.653 [2024-12-09 05:20:09.079195] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.653 [2024-12-09 05:20:09.079441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.653 [2024-12-09 05:20:09.079460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.653 [2024-12-09 05:20:09.084418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.653 [2024-12-09 05:20:09.084653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.653 [2024-12-09 05:20:09.084672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.653 5550.00 IOPS, 693.75 MiB/s [2024-12-09T04:20:09.299Z] [2024-12-09 05:20:09.090433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.653 [2024-12-09 05:20:09.090690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.653 [2024-12-09 05:20:09.090709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.653 [2024-12-09 05:20:09.094868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.653 [2024-12-09 05:20:09.095109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.653 [2024-12-09 05:20:09.095129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.653 [2024-12-09 05:20:09.099177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.653 [2024-12-09 05:20:09.099436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.653 [2024-12-09 05:20:09.099459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.653 [2024-12-09 05:20:09.103479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.653 [2024-12-09 05:20:09.103712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.653 [2024-12-09 05:20:09.103732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.653 [2024-12-09 05:20:09.107821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.653 [2024-12-09 05:20:09.108076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.653 [2024-12-09 05:20:09.108095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.653 [2024-12-09 05:20:09.113130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.653 [2024-12-09 05:20:09.113458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.653 [2024-12-09 05:20:09.113478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.653 [2024-12-09 05:20:09.119488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.653 [2024-12-09 05:20:09.119738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.653 [2024-12-09 05:20:09.119758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.653 [2024-12-09 05:20:09.124782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.653 [2024-12-09 05:20:09.125046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.653 [2024-12-09 05:20:09.125065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.653 [2024-12-09 05:20:09.130700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.653 [2024-12-09 05:20:09.130964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.653 [2024-12-09 05:20:09.130983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.653 [2024-12-09 05:20:09.136542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.653 [2024-12-09 05:20:09.136792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.653 [2024-12-09 05:20:09.136810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.653 [2024-12-09 05:20:09.142474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.653 [2024-12-09 05:20:09.142728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.653 [2024-12-09 05:20:09.142746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.653 [2024-12-09 05:20:09.148211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.653 [2024-12-09 05:20:09.148451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.653 [2024-12-09 05:20:09.148469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.653 [2024-12-09 05:20:09.154358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.653 [2024-12-09 05:20:09.154691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.653 [2024-12-09 05:20:09.154711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.653 [2024-12-09 05:20:09.161054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.653 [2024-12-09 05:20:09.161296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.653 [2024-12-09 05:20:09.161316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.653 [2024-12-09 05:20:09.166865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.653 [2024-12-09 05:20:09.167182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.653 [2024-12-09 05:20:09.167201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.653 [2024-12-09 05:20:09.172681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.653 [2024-12-09 05:20:09.173006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.653 [2024-12-09 05:20:09.173026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.653 [2024-12-09 05:20:09.178934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.653 [2024-12-09 05:20:09.179166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.653 [2024-12-09 05:20:09.179184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.653 [2024-12-09 05:20:09.185085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.653 [2024-12-09 05:20:09.185319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.653 [2024-12-09 05:20:09.185338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.653 [2024-12-09 05:20:09.190852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.653 [2024-12-09 05:20:09.191090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.653 [2024-12-09 05:20:09.191111] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.654 [2024-12-09 05:20:09.196372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.654 [2024-12-09 05:20:09.196622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.654 [2024-12-09 05:20:09.196642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.654 [2024-12-09 05:20:09.201958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.654 [2024-12-09 05:20:09.202196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.654 [2024-12-09 05:20:09.202215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.654 [2024-12-09 05:20:09.207263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.654 [2024-12-09 05:20:09.207506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.654 [2024-12-09 05:20:09.207525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.654 [2024-12-09 05:20:09.212485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.654 [2024-12-09 05:20:09.212714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.654 [2024-12-09 05:20:09.212734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.654 [2024-12-09 05:20:09.217580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.654 [2024-12-09 05:20:09.217816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.654 [2024-12-09 05:20:09.217835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.654 [2024-12-09 05:20:09.224010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.654 [2024-12-09 05:20:09.224249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.654 [2024-12-09 05:20:09.224268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.654 [2024-12-09 05:20:09.229419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.654 [2024-12-09 05:20:09.229661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.654 [2024-12-09 05:20:09.229680] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.654 [2024-12-09 05:20:09.234428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.654 [2024-12-09 05:20:09.234683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.654 [2024-12-09 05:20:09.234702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.654 [2024-12-09 05:20:09.239355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.654 [2024-12-09 05:20:09.239595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.654 [2024-12-09 05:20:09.239614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.654 [2024-12-09 05:20:09.243949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.654 [2024-12-09 05:20:09.244187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.654 [2024-12-09 05:20:09.244209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.654 [2024-12-09 05:20:09.248570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.654 [2024-12-09 05:20:09.248794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.654 [2024-12-09 05:20:09.248812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.654 [2024-12-09 05:20:09.253196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.654 [2024-12-09 05:20:09.253415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.654 [2024-12-09 05:20:09.253433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.654 [2024-12-09 05:20:09.257951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.654 [2024-12-09 05:20:09.258197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.654 [2024-12-09 05:20:09.258215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.654 [2024-12-09 05:20:09.262618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.654 [2024-12-09 05:20:09.262879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.654 [2024-12-09 
05:20:09.262898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.654 [2024-12-09 05:20:09.267447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.654 [2024-12-09 05:20:09.267686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.654 [2024-12-09 05:20:09.267705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.654 [2024-12-09 05:20:09.272670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.654 [2024-12-09 05:20:09.272902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.654 [2024-12-09 05:20:09.272920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.654 [2024-12-09 05:20:09.277427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.654 [2024-12-09 05:20:09.277661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.654 [2024-12-09 05:20:09.277679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.654 [2024-12-09 05:20:09.282132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.654 [2024-12-09 05:20:09.282370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.654 [2024-12-09 05:20:09.282389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.654 [2024-12-09 05:20:09.287329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.654 [2024-12-09 05:20:09.287559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.654 [2024-12-09 05:20:09.287579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.654 [2024-12-09 05:20:09.291924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.654 [2024-12-09 05:20:09.292167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.654 [2024-12-09 05:20:09.292187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.914 [2024-12-09 05:20:09.296485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.914 [2024-12-09 05:20:09.296717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:32.914 [2024-12-09 05:20:09.296736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.914 [2024-12-09 05:20:09.301444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.914 [2024-12-09 05:20:09.301683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.914 [2024-12-09 05:20:09.301702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.914 [2024-12-09 05:20:09.306644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.914 [2024-12-09 05:20:09.306917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.914 [2024-12-09 05:20:09.306936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.914 [2024-12-09 05:20:09.311377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.914 [2024-12-09 05:20:09.311599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.914 [2024-12-09 05:20:09.311618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.914 [2024-12-09 05:20:09.315907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.914 [2024-12-09 05:20:09.316132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.914 [2024-12-09 05:20:09.316151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.914 [2024-12-09 05:20:09.320498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.914 [2024-12-09 05:20:09.320722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.914 [2024-12-09 05:20:09.320741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.914 [2024-12-09 05:20:09.325575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.914 [2024-12-09 05:20:09.325833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.914 [2024-12-09 05:20:09.325852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.914 [2024-12-09 05:20:09.330532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.914 [2024-12-09 05:20:09.330784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:32.914 [2024-12-09 05:20:09.330802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.914 [2024-12-09 05:20:09.335242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.914 [2024-12-09 05:20:09.335466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.914 [2024-12-09 05:20:09.335484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.914 [2024-12-09 05:20:09.339777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.914 [2024-12-09 05:20:09.340025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.914 [2024-12-09 05:20:09.340043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.914 [2024-12-09 05:20:09.344537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.914 [2024-12-09 05:20:09.344759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.914 [2024-12-09 05:20:09.344778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.914 [2024-12-09 05:20:09.349336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.914 [2024-12-09 05:20:09.349583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.914 [2024-12-09 05:20:09.349601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.914 [2024-12-09 05:20:09.354021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.914 [2024-12-09 05:20:09.354265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.914 [2024-12-09 05:20:09.354283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.914 [2024-12-09 05:20:09.358952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.914 [2024-12-09 05:20:09.359191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.914 [2024-12-09 05:20:09.359209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.914 [2024-12-09 05:20:09.363680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.914 [2024-12-09 05:20:09.363924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.914 [2024-12-09 05:20:09.363943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.368313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.915 [2024-12-09 05:20:09.368563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.915 [2024-12-09 05:20:09.368586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.373034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.915 [2024-12-09 05:20:09.373277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.915 [2024-12-09 05:20:09.373296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.377668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.915 [2024-12-09 05:20:09.377918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.915 [2024-12-09 05:20:09.377937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.382461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.915 [2024-12-09 05:20:09.382677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.915 [2024-12-09 05:20:09.382696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.387376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.915 [2024-12-09 05:20:09.387617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.915 [2024-12-09 05:20:09.387636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.391825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.915 [2024-12-09 05:20:09.392082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.915 [2024-12-09 05:20:09.392101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.396453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.915 [2024-12-09 05:20:09.396702] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.915 [2024-12-09 05:20:09.396722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.401120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.915 [2024-12-09 05:20:09.401351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.915 [2024-12-09 05:20:09.401369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.405975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.915 [2024-12-09 05:20:09.406226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.915 [2024-12-09 05:20:09.406245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.410889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.915 [2024-12-09 05:20:09.411142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.915 [2024-12-09 05:20:09.411162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.415656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.915 [2024-12-09 05:20:09.415898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.915 [2024-12-09 05:20:09.415916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.420217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.915 [2024-12-09 05:20:09.420438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.915 [2024-12-09 05:20:09.420456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.424747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.915 [2024-12-09 05:20:09.424973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.915 [2024-12-09 05:20:09.424992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.429406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.915 [2024-12-09 05:20:09.429632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.915 [2024-12-09 05:20:09.429650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.433948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.915 [2024-12-09 05:20:09.434178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.915 [2024-12-09 05:20:09.434197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.438470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.915 [2024-12-09 05:20:09.438691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.915 [2024-12-09 05:20:09.438710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.442928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.915 [2024-12-09 05:20:09.443174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.915 [2024-12-09 05:20:09.443193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.447570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.915 [2024-12-09 05:20:09.447816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.915 [2024-12-09 05:20:09.447835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.452218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.915 [2024-12-09 05:20:09.452437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.915 [2024-12-09 05:20:09.452456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.456985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.915 [2024-12-09 05:20:09.457217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.915 [2024-12-09 05:20:09.457236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.461579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.915 [2024-12-09 
05:20:09.461815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.915 [2024-12-09 05:20:09.461834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.466370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.915 [2024-12-09 05:20:09.466594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.915 [2024-12-09 05:20:09.466612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.471318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.915 [2024-12-09 05:20:09.471571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.915 [2024-12-09 05:20:09.471590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.476080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.915 [2024-12-09 05:20:09.476222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.915 [2024-12-09 05:20:09.476241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.480685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.915 [2024-12-09 05:20:09.480897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.915 [2024-12-09 05:20:09.480915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.485224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.915 [2024-12-09 05:20:09.485453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.915 [2024-12-09 05:20:09.485472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.489744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.915 [2024-12-09 05:20:09.489976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.915 [2024-12-09 05:20:09.490013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.915 [2024-12-09 05:20:09.494487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 
00:25:32.916 [2024-12-09 05:20:09.494721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.916 [2024-12-09 05:20:09.494740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.916 [2024-12-09 05:20:09.499138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.916 [2024-12-09 05:20:09.499381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.916 [2024-12-09 05:20:09.499399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.916 [2024-12-09 05:20:09.503937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.916 [2024-12-09 05:20:09.504182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.916 [2024-12-09 05:20:09.504201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.916 [2024-12-09 05:20:09.508708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.916 [2024-12-09 05:20:09.508940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.916 [2024-12-09 05:20:09.508959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.916 [2024-12-09 05:20:09.513484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.916 [2024-12-09 05:20:09.513749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.916 [2024-12-09 05:20:09.513768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.916 [2024-12-09 05:20:09.518362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.916 [2024-12-09 05:20:09.518581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.916 [2024-12-09 05:20:09.518600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.916 [2024-12-09 05:20:09.523210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.916 [2024-12-09 05:20:09.523457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.916 [2024-12-09 05:20:09.523476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.916 [2024-12-09 05:20:09.528019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.916 [2024-12-09 05:20:09.528281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.916 [2024-12-09 05:20:09.528300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.916 [2024-12-09 05:20:09.533790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.916 [2024-12-09 05:20:09.534094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.916 [2024-12-09 05:20:09.534113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.916 [2024-12-09 05:20:09.540371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.916 [2024-12-09 05:20:09.540653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.916 [2024-12-09 05:20:09.540672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.916 [2024-12-09 05:20:09.547644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.916 [2024-12-09 05:20:09.548019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.916 [2024-12-09 05:20:09.548054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.916 [2024-12-09 05:20:09.555916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:32.916 [2024-12-09 05:20:09.556173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.916 [2024-12-09 05:20:09.556193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.175 [2024-12-09 05:20:09.563396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.176 [2024-12-09 05:20:09.563769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.176 [2024-12-09 05:20:09.563788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.176 [2024-12-09 05:20:09.571114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.176 [2024-12-09 05:20:09.571438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.176 [2024-12-09 05:20:09.571457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.176 [2024-12-09 05:20:09.578980] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.176 [2024-12-09 05:20:09.579312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.176 [2024-12-09 05:20:09.579331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.176 [2024-12-09 05:20:09.586700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.176 [2024-12-09 05:20:09.586986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.176 [2024-12-09 05:20:09.587012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.176 [2024-12-09 05:20:09.594455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.176 [2024-12-09 05:20:09.594805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.176 [2024-12-09 05:20:09.594824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.176 [2024-12-09 05:20:09.602131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.176 [2024-12-09 05:20:09.602462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.176 [2024-12-09 05:20:09.602481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.176 [2024-12-09 05:20:09.609356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.176 [2024-12-09 05:20:09.609676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.176 [2024-12-09 05:20:09.609695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.176 [2024-12-09 05:20:09.617086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.176 [2024-12-09 05:20:09.617437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.176 [2024-12-09 05:20:09.617457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.176 [2024-12-09 05:20:09.623961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.176 [2024-12-09 05:20:09.624280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.176 [2024-12-09 05:20:09.624299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.176 [2024-12-09 05:20:09.629619] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.176 [2024-12-09 05:20:09.629900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.176 [2024-12-09 05:20:09.629920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.176 [2024-12-09 05:20:09.635354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.176 [2024-12-09 05:20:09.635631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.176 [2024-12-09 05:20:09.635650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.176 [2024-12-09 05:20:09.641166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.176 [2024-12-09 05:20:09.641458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.176 [2024-12-09 05:20:09.641477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.176 [2024-12-09 05:20:09.647724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.176 [2024-12-09 05:20:09.648021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.176 [2024-12-09 05:20:09.648055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.176 [2024-12-09 05:20:09.654246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.176 [2024-12-09 05:20:09.654490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.176 [2024-12-09 05:20:09.654514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.176 [2024-12-09 05:20:09.660479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.176 [2024-12-09 05:20:09.660780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.176 [2024-12-09 05:20:09.660799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.176 [2024-12-09 05:20:09.666460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.176 [2024-12-09 05:20:09.666701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.176 [2024-12-09 05:20:09.666720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.176 
[2024-12-09 05:20:09.672085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.176 [2024-12-09 05:20:09.672371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.176 [2024-12-09 05:20:09.672390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.176 [2024-12-09 05:20:09.677542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.176 [2024-12-09 05:20:09.677778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.176 [2024-12-09 05:20:09.677798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.176 [2024-12-09 05:20:09.683223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.176 [2024-12-09 05:20:09.683475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.176 [2024-12-09 05:20:09.683494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.176 [2024-12-09 05:20:09.687842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.176 [2024-12-09 05:20:09.688095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.176 [2024-12-09 05:20:09.688115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.176 [2024-12-09 05:20:09.692864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.176 [2024-12-09 05:20:09.693103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.176 [2024-12-09 05:20:09.693122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.176 [2024-12-09 05:20:09.698315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.176 [2024-12-09 05:20:09.698537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.176 [2024-12-09 05:20:09.698555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.176 [2024-12-09 05:20:09.703298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.176 [2024-12-09 05:20:09.703535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.176 [2024-12-09 05:20:09.703554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:25:33.176 [2024-12-09 05:20:09.708119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.176 [2024-12-09 05:20:09.708374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.176 [2024-12-09 05:20:09.708393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.176 [2024-12-09 05:20:09.712969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.176 [2024-12-09 05:20:09.713212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.176 [2024-12-09 05:20:09.713231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.176 [2024-12-09 05:20:09.717394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.176 [2024-12-09 05:20:09.717620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.176 [2024-12-09 05:20:09.717639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.176 [2024-12-09 05:20:09.721613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.176 [2024-12-09 05:20:09.721836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.177 [2024-12-09 05:20:09.721855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.177 [2024-12-09 05:20:09.725801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.177 [2024-12-09 05:20:09.726029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.177 [2024-12-09 05:20:09.726048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.177 [2024-12-09 05:20:09.729990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.177 [2024-12-09 05:20:09.730225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.177 [2024-12-09 05:20:09.730244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.177 [2024-12-09 05:20:09.734112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.177 [2024-12-09 05:20:09.734334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.177 [2024-12-09 05:20:09.734353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.177 [2024-12-09 05:20:09.738222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.177 [2024-12-09 05:20:09.738452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.177 [2024-12-09 05:20:09.738470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.177 [2024-12-09 05:20:09.742338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.177 [2024-12-09 05:20:09.742565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.177 [2024-12-09 05:20:09.742584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.177 [2024-12-09 05:20:09.746902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.177 [2024-12-09 05:20:09.747140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.177 [2024-12-09 05:20:09.747158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.177 [2024-12-09 05:20:09.751227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.177 [2024-12-09 05:20:09.751439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.177 [2024-12-09 05:20:09.751458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.177 [2024-12-09 05:20:09.755730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.177 [2024-12-09 05:20:09.755955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.177 [2024-12-09 05:20:09.755974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.177 [2024-12-09 05:20:09.761152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.177 [2024-12-09 05:20:09.761430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.177 [2024-12-09 05:20:09.761449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.177 [2024-12-09 05:20:09.766459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.177 [2024-12-09 05:20:09.766672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.177 [2024-12-09 05:20:09.766691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.177 [2024-12-09 05:20:09.771735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.177 [2024-12-09 05:20:09.771955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.177 [2024-12-09 05:20:09.771973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.177 [2024-12-09 05:20:09.777102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.177 [2024-12-09 05:20:09.777355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.177 [2024-12-09 05:20:09.777374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.177 [2024-12-09 05:20:09.782541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.177 [2024-12-09 05:20:09.782769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.177 [2024-12-09 05:20:09.782791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.177 [2024-12-09 05:20:09.788299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.177 [2024-12-09 05:20:09.788556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.177 [2024-12-09 05:20:09.788574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.177 [2024-12-09 05:20:09.793729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.177 [2024-12-09 05:20:09.794205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.177 [2024-12-09 05:20:09.794224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.177 [2024-12-09 05:20:09.799423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.177 [2024-12-09 05:20:09.799650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.177 [2024-12-09 05:20:09.799669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.177 [2024-12-09 05:20:09.804950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.177 [2024-12-09 05:20:09.805202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.177 [2024-12-09 05:20:09.805221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.177 [2024-12-09 05:20:09.810334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.177 [2024-12-09 05:20:09.810554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.177 [2024-12-09 05:20:09.810573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.177 [2024-12-09 05:20:09.815629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.177 [2024-12-09 05:20:09.815858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.177 [2024-12-09 05:20:09.815877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.438 [2024-12-09 05:20:09.821161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.438 [2024-12-09 05:20:09.821414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.438 [2024-12-09 05:20:09.821433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.438 [2024-12-09 05:20:09.826796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.438 [2024-12-09 05:20:09.827034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.438 [2024-12-09 05:20:09.827054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.438 [2024-12-09 05:20:09.832255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.438 [2024-12-09 05:20:09.832465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.438 [2024-12-09 05:20:09.832484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.438 [2024-12-09 05:20:09.838254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.438 [2024-12-09 05:20:09.838486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.438 [2024-12-09 05:20:09.838505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.438 [2024-12-09 05:20:09.844167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.438 [2024-12-09 05:20:09.844397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.438 [2024-12-09 
05:20:09.844417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.438 [2024-12-09 05:20:09.849424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.438 [2024-12-09 05:20:09.849839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.438 [2024-12-09 05:20:09.849858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.438 [2024-12-09 05:20:09.855387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.438 [2024-12-09 05:20:09.855617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.438 [2024-12-09 05:20:09.855635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.438 [2024-12-09 05:20:09.860188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.438 [2024-12-09 05:20:09.860422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.438 [2024-12-09 05:20:09.860441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.438 [2024-12-09 05:20:09.864714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.438 [2024-12-09 05:20:09.864934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.438 [2024-12-09 05:20:09.864959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.438 [2024-12-09 05:20:09.869066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.438 [2024-12-09 05:20:09.869292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.438 [2024-12-09 05:20:09.869312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.438 [2024-12-09 05:20:09.873295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.438 [2024-12-09 05:20:09.873534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.438 [2024-12-09 05:20:09.873553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.438 [2024-12-09 05:20:09.877557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.438 [2024-12-09 05:20:09.877786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:33.438 [2024-12-09 05:20:09.877806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.438 [2024-12-09 05:20:09.881775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.438 [2024-12-09 05:20:09.882024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.438 [2024-12-09 05:20:09.882042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.438 [2024-12-09 05:20:09.886023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.438 [2024-12-09 05:20:09.886252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.438 [2024-12-09 05:20:09.886271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.438 [2024-12-09 05:20:09.890262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.438 [2024-12-09 05:20:09.890494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.438 [2024-12-09 05:20:09.890513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.438 [2024-12-09 05:20:09.894448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.438 [2024-12-09 05:20:09.894678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.438 [2024-12-09 05:20:09.894698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.438 [2024-12-09 05:20:09.898635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.438 [2024-12-09 05:20:09.898869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.438 [2024-12-09 05:20:09.898887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.438 [2024-12-09 05:20:09.902808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.438 [2024-12-09 05:20:09.903053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.438 [2024-12-09 05:20:09.903072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.438 [2024-12-09 05:20:09.906980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.438 [2024-12-09 05:20:09.907224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.438 [2024-12-09 05:20:09.907243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.438 [2024-12-09 05:20:09.911194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.438 [2024-12-09 05:20:09.911427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.438 [2024-12-09 05:20:09.911450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.438 [2024-12-09 05:20:09.915376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.438 [2024-12-09 05:20:09.915606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.438 [2024-12-09 05:20:09.915625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.438 [2024-12-09 05:20:09.919517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.438 [2024-12-09 05:20:09.919749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.438 [2024-12-09 05:20:09.919768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.438 [2024-12-09 05:20:09.923635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.438 [2024-12-09 05:20:09.923863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.438 [2024-12-09 05:20:09.923882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.438 [2024-12-09 05:20:09.927784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.438 [2024-12-09 05:20:09.928035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.438 [2024-12-09 05:20:09.928055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.438 [2024-12-09 05:20:09.931901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.438 [2024-12-09 05:20:09.932141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.438 [2024-12-09 05:20:09.932160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.438 [2024-12-09 05:20:09.936035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.438 [2024-12-09 05:20:09.936266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:09.936285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.439 [2024-12-09 05:20:09.940188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.439 [2024-12-09 05:20:09.940430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:09.940450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.439 [2024-12-09 05:20:09.944252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.439 [2024-12-09 05:20:09.944472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:09.944492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.439 [2024-12-09 05:20:09.948242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.439 [2024-12-09 05:20:09.948451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:09.948470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.439 [2024-12-09 05:20:09.952257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.439 [2024-12-09 05:20:09.952474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:09.952493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.439 [2024-12-09 05:20:09.956252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.439 [2024-12-09 05:20:09.956459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:09.956478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.439 [2024-12-09 05:20:09.960212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.439 [2024-12-09 05:20:09.960416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:09.960435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.439 [2024-12-09 05:20:09.964173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.439 [2024-12-09 05:20:09.964390] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:09.964410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.439 [2024-12-09 05:20:09.968101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.439 [2024-12-09 05:20:09.968332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:09.968352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.439 [2024-12-09 05:20:09.972056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.439 [2024-12-09 05:20:09.972280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:09.972299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.439 [2024-12-09 05:20:09.975981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.439 [2024-12-09 05:20:09.976230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:09.976249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.439 [2024-12-09 05:20:09.979894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.439 [2024-12-09 05:20:09.980127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:09.980154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.439 [2024-12-09 05:20:09.983845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.439 [2024-12-09 05:20:09.984078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:09.984097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.439 [2024-12-09 05:20:09.987753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.439 [2024-12-09 05:20:09.987974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:09.987994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.439 [2024-12-09 05:20:09.991700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.439 [2024-12-09 05:20:09.991911] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:09.991930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.439 [2024-12-09 05:20:09.995649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.439 [2024-12-09 05:20:09.995868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:09.995887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.439 [2024-12-09 05:20:09.999602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.439 [2024-12-09 05:20:09.999817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:09.999836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.439 [2024-12-09 05:20:10.003600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.439 [2024-12-09 05:20:10.003820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:10.003839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.439 [2024-12-09 05:20:10.007560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.439 [2024-12-09 05:20:10.007773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:10.007792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.439 [2024-12-09 05:20:10.011546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.439 [2024-12-09 05:20:10.011758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:10.011778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.439 [2024-12-09 05:20:10.015495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.439 [2024-12-09 05:20:10.015711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:10.015734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.439 [2024-12-09 05:20:10.019565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.439 [2024-12-09 
05:20:10.019861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:10.019888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.439 [2024-12-09 05:20:10.024643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.439 [2024-12-09 05:20:10.024822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:10.024844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.439 [2024-12-09 05:20:10.028627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.439 [2024-12-09 05:20:10.028817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:10.028835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.439 [2024-12-09 05:20:10.032607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.439 [2024-12-09 05:20:10.032795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:10.032814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.439 [2024-12-09 05:20:10.036585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.439 [2024-12-09 05:20:10.036761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:10.036779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.439 [2024-12-09 05:20:10.040600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.439 [2024-12-09 05:20:10.040787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:10.040805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.439 [2024-12-09 05:20:10.044567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.439 [2024-12-09 05:20:10.044755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.439 [2024-12-09 05:20:10.044772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.440 [2024-12-09 05:20:10.048574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 
00:25:33.440 [2024-12-09 05:20:10.048762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.440 [2024-12-09 05:20:10.048780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.440 [2024-12-09 05:20:10.052546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.440 [2024-12-09 05:20:10.052723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.440 [2024-12-09 05:20:10.052741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.440 [2024-12-09 05:20:10.057222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.440 [2024-12-09 05:20:10.057447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.440 [2024-12-09 05:20:10.057474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.440 [2024-12-09 05:20:10.061338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.440 [2024-12-09 05:20:10.061568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.440 [2024-12-09 05:20:10.061593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.440 [2024-12-09 05:20:10.065324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.440 [2024-12-09 05:20:10.065541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.440 [2024-12-09 05:20:10.065566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.440 [2024-12-09 05:20:10.069369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.440 [2024-12-09 05:20:10.069598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.440 [2024-12-09 05:20:10.069621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.440 [2024-12-09 05:20:10.073381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.440 [2024-12-09 05:20:10.073617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.440 [2024-12-09 05:20:10.073640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.440 [2024-12-09 05:20:10.077371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.440 [2024-12-09 05:20:10.077606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.440 [2024-12-09 05:20:10.077631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.698 [2024-12-09 05:20:10.081355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.698 [2024-12-09 05:20:10.081595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.698 [2024-12-09 05:20:10.081617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.698 [2024-12-09 05:20:10.085372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151d4c0) with pdu=0x200016eff3c8 00:25:33.698 [2024-12-09 05:20:10.085605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.698 [2024-12-09 05:20:10.085630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.698 5912.50 IOPS, 739.06 MiB/s 00:25:33.698 Latency(us) 00:25:33.698 [2024-12-09T04:20:10.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.698 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:33.698 nvme0n1 : 2.00 5911.03 738.88 0.00 0.00 2702.71 1866.35 8833.11 00:25:33.698 [2024-12-09T04:20:10.344Z] =================================================================================================================== 00:25:33.698 [2024-12-09T04:20:10.344Z] Total : 5911.03 738.88 0.00 0.00 2702.71 1866.35 8833.11 00:25:33.698 { 00:25:33.698 "results": [ 00:25:33.698 { 00:25:33.698 "job": "nvme0n1", 00:25:33.698 "core_mask": "0x2", 00:25:33.698 "workload": "randwrite", 00:25:33.698 "status": "finished", 00:25:33.698 "queue_depth": 16, 00:25:33.698 "io_size": 131072, 00:25:33.698 "runtime": 2.003205, 00:25:33.698 "iops": 5911.027578305765, 00:25:33.698 "mibps": 738.8784472882206, 00:25:33.698 "io_failed": 0, 00:25:33.698 "io_timeout": 0, 00:25:33.698 "avg_latency_us": 2702.7063677788674, 00:25:33.698 "min_latency_us": 1866.351304347826, 00:25:33.698 "max_latency_us": 8833.11304347826 00:25:33.698 } 00:25:33.698 ], 00:25:33.698 "core_count": 1 00:25:33.698 } 00:25:33.698 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:33.698 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:33.698 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:33.698 | .driver_specific 00:25:33.698 | .nvme_error 00:25:33.698 | .status_code 00:25:33.698 | .command_transient_transport_error' 00:25:33.698 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:33.698 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 382 > 0 )) 00:25:33.698 05:20:10 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3727634 00:25:33.698 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3727634 ']' 00:25:33.698 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3727634 00:25:33.698 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:33.698 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:33.698 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3727634 00:25:33.957 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:33.957 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:33.957 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3727634' 00:25:33.957 killing process with pid 3727634 00:25:33.957 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3727634 00:25:33.957 Received shutdown signal, test time was about 2.000000 seconds 00:25:33.957 00:25:33.957 Latency(us) 00:25:33.957 [2024-12-09T04:20:10.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.957 [2024-12-09T04:20:10.603Z] =================================================================================================================== 00:25:33.957 [2024-12-09T04:20:10.603Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:33.957 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3727634 00:25:33.957 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3725909 00:25:33.957 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3725909 ']' 00:25:33.957 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3725909 00:25:33.957 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:33.957 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:33.957 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3725909 00:25:33.957 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:33.957 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:33.957 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3725909' 00:25:33.957 killing process with pid 3725909 00:25:33.957 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3725909 00:25:33.957 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3725909 00:25:34.217 00:25:34.217 real 0m14.096s 00:25:34.217 user 0m27.060s 00:25:34.217 sys 0m4.318s 00:25:34.217 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:25:34.217 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:34.217 ************************************ 00:25:34.217 END TEST nvmf_digest_error 00:25:34.217 ************************************ 00:25:34.217 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:34.217 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:34.217 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:34.217 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:25:34.217 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:34.217 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:25:34.217 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:34.217 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:34.217 rmmod nvme_tcp 00:25:34.217 rmmod nvme_fabrics 00:25:34.217 rmmod nvme_keyring 00:25:34.476 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:34.476 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:25:34.476 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:25:34.477 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3725909 ']' 00:25:34.477 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3725909 00:25:34.477 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3725909 ']' 00:25:34.477 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3725909 00:25:34.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3725909) - No such process 00:25:34.477 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3725909 is not found' 00:25:34.477 Process with pid 3725909 is not found 00:25:34.477 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:34.477 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:34.477 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:34.477 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:25:34.477 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:25:34.477 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:34.477 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:25:34.477 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:34.477 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:34.477 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.477 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:34.477 05:20:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.378 05:20:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 
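The pass/fail decision for the digest-error run above is driven by the transient-transport-error counter that bdevperf keeps per bdev: the harness queries bdevperf's RPC socket with bdev_get_iostat, extracts the counter with jq, and requires it to be non-zero (382 in this run). A minimal sketch of that check, assuming an SPDK checkout as the working directory, a bdevperf instance still serving RPCs on /var/tmp/bperf.sock, and the attached controller exposed as nvme0n1 (all taken from the trace above):

# Ask bdevperf for per-bdev I/O statistics and pull out the NVMe
# "command transient transport error" status-code counter.
count=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

# The data-digest error test only passes when at least one such error was observed.
(( count > 0 )) || exit 1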
00:25:36.378 00:25:36.378 real 0m36.241s 00:25:36.378 user 0m55.606s 00:25:36.378 sys 0m13.020s 00:25:36.378 05:20:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:36.378 05:20:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:36.378 ************************************ 00:25:36.378 END TEST nvmf_digest 00:25:36.378 ************************************ 00:25:36.378 05:20:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:25:36.378 05:20:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:25:36.378 05:20:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:25:36.378 05:20:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:36.378 05:20:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:36.378 05:20:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:36.378 05:20:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.637 ************************************ 00:25:36.637 START TEST nvmf_bdevperf 00:25:36.637 ************************************ 00:25:36.637 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:36.637 * Looking for test storage... 00:25:36.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:36.637 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:36.637 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:36.637 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:36.637 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:36.637 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:36.637 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:36.637 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:36.637 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:25:36.637 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:25:36.637 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:25:36.637 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:25:36.637 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:25:36.637 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:25:36.637 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:25:36.637 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:36.637 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:25:36.637 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:25:36.637 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:36.637 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:36.637 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:25:36.637 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:25:36.637 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:36.637 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:25:36.637 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:36.637 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:25:36.637 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:36.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.638 --rc genhtml_branch_coverage=1 00:25:36.638 --rc genhtml_function_coverage=1 00:25:36.638 --rc genhtml_legend=1 00:25:36.638 --rc geninfo_all_blocks=1 00:25:36.638 --rc geninfo_unexecuted_blocks=1 00:25:36.638 00:25:36.638 ' 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:36.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.638 --rc genhtml_branch_coverage=1 00:25:36.638 --rc genhtml_function_coverage=1 00:25:36.638 --rc genhtml_legend=1 00:25:36.638 --rc geninfo_all_blocks=1 00:25:36.638 --rc geninfo_unexecuted_blocks=1 00:25:36.638 00:25:36.638 ' 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:36.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.638 --rc genhtml_branch_coverage=1 00:25:36.638 --rc genhtml_function_coverage=1 00:25:36.638 --rc genhtml_legend=1 00:25:36.638 --rc geninfo_all_blocks=1 00:25:36.638 --rc geninfo_unexecuted_blocks=1 00:25:36.638 00:25:36.638 ' 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:36.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.638 --rc genhtml_branch_coverage=1 00:25:36.638 --rc genhtml_function_coverage=1 00:25:36.638 --rc genhtml_legend=1 00:25:36.638 --rc geninfo_all_blocks=1 00:25:36.638 --rc geninfo_unexecuted_blocks=1 00:25:36.638 00:25:36.638 ' 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:36.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:36.638 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:36.639 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:36.639 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:36.639 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:36.639 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:36.639 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.639 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:36.639 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.639 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:36.639 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:36.639 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:36.639 05:20:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:43.216 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:43.216 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:43.217 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:43.217 Found net devices under 0000:86:00.0: cvl_0_0 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:43.217 Found net devices under 0000:86:00.1: cvl_0_1 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:43.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:43.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms 00:25:43.217 00:25:43.217 --- 10.0.0.2 ping statistics --- 00:25:43.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.217 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:43.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:43.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:25:43.217 00:25:43.217 --- 10.0.0.1 ping statistics --- 00:25:43.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.217 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:43.217 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:43.218 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:43.218 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3731790 00:25:43.218 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3731790 00:25:43.218 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:43.218 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3731790 ']' 00:25:43.218 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:43.218 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:43.218 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:43.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:43.218 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:43.218 05:20:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:43.218 [2024-12-09 05:20:18.951460] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:25:43.218 [2024-12-09 05:20:18.951509] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:43.218 [2024-12-09 05:20:19.021259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:43.218 [2024-12-09 05:20:19.064167] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:43.218 [2024-12-09 05:20:19.064205] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:43.218 [2024-12-09 05:20:19.064213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:43.218 [2024-12-09 05:20:19.064219] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:43.218 [2024-12-09 05:20:19.064224] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:43.218 [2024-12-09 05:20:19.065482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:43.218 [2024-12-09 05:20:19.065510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:43.218 [2024-12-09 05:20:19.065512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:43.218 [2024-12-09 05:20:19.203790] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:43.218 Malloc0 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:43.218 [2024-12-09 05:20:19.270953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:43.218 { 00:25:43.218 "params": { 00:25:43.218 "name": "Nvme$subsystem", 00:25:43.218 "trtype": "$TEST_TRANSPORT", 00:25:43.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:43.218 "adrfam": "ipv4", 00:25:43.218 "trsvcid": "$NVMF_PORT", 00:25:43.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:43.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:43.218 "hdgst": ${hdgst:-false}, 00:25:43.218 "ddgst": ${ddgst:-false} 00:25:43.218 }, 00:25:43.218 "method": "bdev_nvme_attach_controller" 00:25:43.218 } 00:25:43.218 EOF 00:25:43.218 )") 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:25:43.218 05:20:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:43.218 "params": { 00:25:43.218 "name": "Nvme1", 00:25:43.218 "trtype": "tcp", 00:25:43.218 "traddr": "10.0.0.2", 00:25:43.218 "adrfam": "ipv4", 00:25:43.218 "trsvcid": "4420", 00:25:43.218 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:43.218 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:43.218 "hdgst": false, 00:25:43.218 "ddgst": false 00:25:43.218 }, 00:25:43.218 "method": "bdev_nvme_attach_controller" 00:25:43.218 }' 00:25:43.218 [2024-12-09 05:20:19.322350] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
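The target bring-up just logged is a five-step RPC sequence against the nvmf_tgt running in the namespace: a TCP transport with an 8192-byte IO unit size, a 64 MB / 512-byte-block Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 (any host allowed, serial SPDK00000000000001), Malloc0 attached as its namespace, and a listener on 10.0.0.2:4420. The same sequence, sketched with the stock scripts/rpc.py client instead of the test framework's rpc_cmd wrapper (the rpc.py path and the default /var/tmp/spdk.sock RPC socket are assumptions here; the RPC names and arguments are the ones in the trace):

  # sketch of the target configuration above, issued against a running nvmf_tgt
  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MB bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf itself never touches the target's RPC socket: gen_nvmf_target_json renders the single-controller JSON shown above (one bdev_nvme_attach_controller call for Nvme1 at 10.0.0.2:4420) and hands it to bdevperf through --json /dev/fd/62, so the initiator side is fully described by that generated config.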
00:25:43.218 [2024-12-09 05:20:19.322393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3731817 ] 00:25:43.218 [2024-12-09 05:20:19.387390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.218 [2024-12-09 05:20:19.429151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.218 Running I/O for 1 seconds... 00:25:44.153 10647.00 IOPS, 41.59 MiB/s 00:25:44.153 Latency(us) 00:25:44.153 [2024-12-09T04:20:20.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.153 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:44.153 Verification LBA range: start 0x0 length 0x4000 00:25:44.153 Nvme1n1 : 1.01 10693.36 41.77 0.00 0.00 11921.79 968.79 13677.08 00:25:44.153 [2024-12-09T04:20:20.799Z] =================================================================================================================== 00:25:44.153 [2024-12-09T04:20:20.799Z] Total : 10693.36 41.77 0.00 0.00 11921.79 968.79 13677.08 00:25:44.412 05:20:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3732050 00:25:44.412 05:20:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:25:44.412 05:20:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:44.412 05:20:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:44.412 05:20:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:25:44.412 05:20:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:25:44.412 05:20:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:44.412 05:20:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:44.412 { 00:25:44.412 "params": { 00:25:44.412 "name": "Nvme$subsystem", 00:25:44.412 "trtype": "$TEST_TRANSPORT", 00:25:44.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:44.412 "adrfam": "ipv4", 00:25:44.412 "trsvcid": "$NVMF_PORT", 00:25:44.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:44.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:44.412 "hdgst": ${hdgst:-false}, 00:25:44.412 "ddgst": ${ddgst:-false} 00:25:44.412 }, 00:25:44.412 "method": "bdev_nvme_attach_controller" 00:25:44.412 } 00:25:44.412 EOF 00:25:44.412 )") 00:25:44.412 05:20:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:25:44.412 05:20:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:25:44.412 05:20:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:25:44.412 05:20:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:44.412 "params": { 00:25:44.412 "name": "Nvme1", 00:25:44.412 "trtype": "tcp", 00:25:44.412 "traddr": "10.0.0.2", 00:25:44.412 "adrfam": "ipv4", 00:25:44.412 "trsvcid": "4420", 00:25:44.412 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:44.412 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:44.412 "hdgst": false, 00:25:44.412 "ddgst": false 00:25:44.412 }, 00:25:44.412 "method": "bdev_nvme_attach_controller" 00:25:44.412 }' 00:25:44.412 [2024-12-09 05:20:20.884438] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:25:44.412 [2024-12-09 05:20:20.884489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3732050 ] 00:25:44.412 [2024-12-09 05:20:20.949057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.412 [2024-12-09 05:20:20.987801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.686 Running I/O for 15 seconds... 00:25:46.997 10605.00 IOPS, 41.43 MiB/s [2024-12-09T04:20:23.903Z] 10724.00 IOPS, 41.89 MiB/s [2024-12-09T04:20:23.903Z] 05:20:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3731790 00:25:47.257 05:20:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:25:47.257 [2024-12-09 05:20:23.853316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:84400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.257 [2024-12-09 05:20:23.853358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.257 [2024-12-09 05:20:23.853375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:84408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.257 [2024-12-09 05:20:23.853385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.257 [2024-12-09 05:20:23.853396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.257 [2024-12-09 05:20:23.853408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.257 [2024-12-09 05:20:23.853417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.257 [2024-12-09 05:20:23.853425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.257 [2024-12-09 05:20:23.853434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.257 [2024-12-09 05:20:23.853441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.257 [2024-12-09 05:20:23.853450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.257 [2024-12-09 
05:20:23.853456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.257 (condensed: the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for every remaining queued I/O on qid:1 -- READ commands for lba:84448 through lba:84904 and WRITE commands for lba:84912 through lba:85408, len:8 each -- and every one of them completes with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0) 00:25:47.258
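This flood of aborted completions is the direct effect of the kill -9 a few seconds earlier: with the target process gone, bdev_nvme tears down the qpair and every I/O still queued on it is completed with ABORTED - SQ DELETION before the reconnect logic takes over. When reading such a capture it is usually quicker to summarize it than to scan it; a small triage sketch over the captured console output (bdevperf.log is a placeholder file name, not something the test writes):

  # count how many queued commands were aborted, split by direction,
  # and how many reconnect attempts were refused
  grep -c 'ABORTED - SQ DELETION' bdevperf.log
  grep -o 'READ sqid:[0-9]*'  bdevperf.log | sort | uniq -c
  grep -o 'WRITE sqid:[0-9]*' bdevperf.log | sort | uniq -c
  grep -c 'connect() failed, errno = 111' bdevperf.log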
[2024-12-09 05:20:23.855455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103e6c0 is same with the state(6) to be set 00:25:47.258 [2024-12-09 05:20:23.855464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:47.258 [2024-12-09 05:20:23.855472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:47.258 [2024-12-09 05:20:23.855478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85416 len:8 PRP1 0x0 PRP2 0x0 00:25:47.258 [2024-12-09 05:20:23.855485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.258 [2024-12-09 05:20:23.858387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.259 [2024-12-09 05:20:23.858440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.259 [2024-12-09 05:20:23.859069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.259 [2024-12-09 05:20:23.859087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.259 [2024-12-09 05:20:23.859095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.259 [2024-12-09 05:20:23.859275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.259 [2024-12-09 05:20:23.859453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.259 [2024-12-09 05:20:23.859461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.259 [2024-12-09 05:20:23.859469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.259 [2024-12-09 05:20:23.859477] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:47.259 [2024-12-09 05:20:23.871737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.259 [2024-12-09 05:20:23.872182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.259 [2024-12-09 05:20:23.872201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.259 [2024-12-09 05:20:23.872209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.259 [2024-12-09 05:20:23.872388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.259 [2024-12-09 05:20:23.872567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.259 [2024-12-09 05:20:23.872576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.259 [2024-12-09 05:20:23.872583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:25:47.259 [2024-12-09 05:20:23.872590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:47.259 [2024-12-09 05:20:23.884642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.259 [2024-12-09 05:20:23.885094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.259 [2024-12-09 05:20:23.885140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.259 [2024-12-09 05:20:23.885165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.259 [2024-12-09 05:20:23.885611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.259 [2024-12-09 05:20:23.885785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.259 [2024-12-09 05:20:23.885793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.259 [2024-12-09 05:20:23.885799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.259 [2024-12-09 05:20:23.885806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:47.259 [2024-12-09 05:20:23.897813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.259 [2024-12-09 05:20:23.898274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.259 [2024-12-09 05:20:23.898292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.259 [2024-12-09 05:20:23.898300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.259 [2024-12-09 05:20:23.898478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.259 [2024-12-09 05:20:23.898657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.259 [2024-12-09 05:20:23.898665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.259 [2024-12-09 05:20:23.898672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.259 [2024-12-09 05:20:23.898679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.518 [2024-12-09 05:20:23.910738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.518 [2024-12-09 05:20:23.911178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.518 [2024-12-09 05:20:23.911216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.518 [2024-12-09 05:20:23.911241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.518 [2024-12-09 05:20:23.911824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.518 [2024-12-09 05:20:23.912116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.518 [2024-12-09 05:20:23.912124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.519 [2024-12-09 05:20:23.912131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.519 [2024-12-09 05:20:23.912138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:47.519 [2024-12-09 05:20:23.924322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.519 [2024-12-09 05:20:23.924769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.519 [2024-12-09 05:20:23.924785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.519 [2024-12-09 05:20:23.924792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.519 [2024-12-09 05:20:23.924965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.519 [2024-12-09 05:20:23.925174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.519 [2024-12-09 05:20:23.925184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.519 [2024-12-09 05:20:23.925190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.519 [2024-12-09 05:20:23.925196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.519 [2024-12-09 05:20:23.937301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.519 [2024-12-09 05:20:23.937719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.519 [2024-12-09 05:20:23.937738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.519 [2024-12-09 05:20:23.937745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.519 [2024-12-09 05:20:23.937909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.519 [2024-12-09 05:20:23.938097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.519 [2024-12-09 05:20:23.938106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.519 [2024-12-09 05:20:23.938113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.519 [2024-12-09 05:20:23.938119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:47.519 [2024-12-09 05:20:23.950121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.519 [2024-12-09 05:20:23.950562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.519 [2024-12-09 05:20:23.950579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.519 [2024-12-09 05:20:23.950586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.519 [2024-12-09 05:20:23.950759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.519 [2024-12-09 05:20:23.950932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.519 [2024-12-09 05:20:23.950939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.519 [2024-12-09 05:20:23.950946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.519 [2024-12-09 05:20:23.950952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.519 [2024-12-09 05:20:23.963059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.519 [2024-12-09 05:20:23.963368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.519 [2024-12-09 05:20:23.963384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.519 [2024-12-09 05:20:23.963392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.519 [2024-12-09 05:20:23.963566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.519 [2024-12-09 05:20:23.963739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.519 [2024-12-09 05:20:23.963747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.519 [2024-12-09 05:20:23.963753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.519 [2024-12-09 05:20:23.963760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:47.519 [2024-12-09 05:20:23.975951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.519 [2024-12-09 05:20:23.976332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.519 [2024-12-09 05:20:23.976349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.519 [2024-12-09 05:20:23.976356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.519 [2024-12-09 05:20:23.976534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.519 [2024-12-09 05:20:23.976707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.519 [2024-12-09 05:20:23.976716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.519 [2024-12-09 05:20:23.976723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.519 [2024-12-09 05:20:23.976729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.519 [2024-12-09 05:20:23.988918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.519 [2024-12-09 05:20:23.989396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.519 [2024-12-09 05:20:23.989413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.519 [2024-12-09 05:20:23.989421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.519 [2024-12-09 05:20:23.989600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.519 [2024-12-09 05:20:23.989778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.519 [2024-12-09 05:20:23.989787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.519 [2024-12-09 05:20:23.989794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.519 [2024-12-09 05:20:23.989801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:47.519 [2024-12-09 05:20:24.001912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.519 [2024-12-09 05:20:24.002353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.519 [2024-12-09 05:20:24.002370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.519 [2024-12-09 05:20:24.002378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.519 [2024-12-09 05:20:24.002550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.519 [2024-12-09 05:20:24.002724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.519 [2024-12-09 05:20:24.002732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.519 [2024-12-09 05:20:24.002739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.519 [2024-12-09 05:20:24.002745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.519 [2024-12-09 05:20:24.014930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.519 [2024-12-09 05:20:24.015338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.519 [2024-12-09 05:20:24.015355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.519 [2024-12-09 05:20:24.015362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.519 [2024-12-09 05:20:24.015536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.519 [2024-12-09 05:20:24.015709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.519 [2024-12-09 05:20:24.015717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.519 [2024-12-09 05:20:24.015727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.519 [2024-12-09 05:20:24.015734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:47.519 [2024-12-09 05:20:24.028051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.519 [2024-12-09 05:20:24.028505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.519 [2024-12-09 05:20:24.028522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.519 [2024-12-09 05:20:24.028530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.519 [2024-12-09 05:20:24.028708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.519 [2024-12-09 05:20:24.028887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.519 [2024-12-09 05:20:24.028895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.519 [2024-12-09 05:20:24.028902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.519 [2024-12-09 05:20:24.028909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.519 [2024-12-09 05:20:24.041189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.519 [2024-12-09 05:20:24.041578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.519 [2024-12-09 05:20:24.041595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.519 [2024-12-09 05:20:24.041602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.519 [2024-12-09 05:20:24.041781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.520 [2024-12-09 05:20:24.041958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.520 [2024-12-09 05:20:24.041966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.520 [2024-12-09 05:20:24.041973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.520 [2024-12-09 05:20:24.041980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:47.520 [2024-12-09 05:20:24.054273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.520 [2024-12-09 05:20:24.054755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.520 [2024-12-09 05:20:24.054802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.520 [2024-12-09 05:20:24.054825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.520 [2024-12-09 05:20:24.055313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.520 [2024-12-09 05:20:24.055493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.520 [2024-12-09 05:20:24.055502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.520 [2024-12-09 05:20:24.055509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.520 [2024-12-09 05:20:24.055516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.520 [2024-12-09 05:20:24.067281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.520 [2024-12-09 05:20:24.067588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.520 [2024-12-09 05:20:24.067605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.520 [2024-12-09 05:20:24.067612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.520 [2024-12-09 05:20:24.067785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.520 [2024-12-09 05:20:24.067957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.520 [2024-12-09 05:20:24.067965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.520 [2024-12-09 05:20:24.067972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.520 [2024-12-09 05:20:24.067978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:47.520 [2024-12-09 05:20:24.080250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.520 [2024-12-09 05:20:24.080579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.520 [2024-12-09 05:20:24.080624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.520 [2024-12-09 05:20:24.080647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.520 [2024-12-09 05:20:24.081161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.520 [2024-12-09 05:20:24.081340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.520 [2024-12-09 05:20:24.081348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.520 [2024-12-09 05:20:24.081355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.520 [2024-12-09 05:20:24.081362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.520 [2024-12-09 05:20:24.093141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.520 [2024-12-09 05:20:24.093518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.520 [2024-12-09 05:20:24.093534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.520 [2024-12-09 05:20:24.093541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.520 [2024-12-09 05:20:24.093714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.520 [2024-12-09 05:20:24.093887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.520 [2024-12-09 05:20:24.093894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.520 [2024-12-09 05:20:24.093901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.520 [2024-12-09 05:20:24.093907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:47.520 [2024-12-09 05:20:24.106138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.520 [2024-12-09 05:20:24.106496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.520 [2024-12-09 05:20:24.106516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.520 [2024-12-09 05:20:24.106524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.520 [2024-12-09 05:20:24.106703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.520 [2024-12-09 05:20:24.106881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.520 [2024-12-09 05:20:24.106890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.520 [2024-12-09 05:20:24.106897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.520 [2024-12-09 05:20:24.106904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.520 [2024-12-09 05:20:24.119268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.520 [2024-12-09 05:20:24.119704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.520 [2024-12-09 05:20:24.119722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.520 [2024-12-09 05:20:24.119730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.520 [2024-12-09 05:20:24.119909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.520 [2024-12-09 05:20:24.120095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.520 [2024-12-09 05:20:24.120105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.520 [2024-12-09 05:20:24.120112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.520 [2024-12-09 05:20:24.120118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:47.520 [2024-12-09 05:20:24.132504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.520 [2024-12-09 05:20:24.132817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.520 [2024-12-09 05:20:24.132834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.520 [2024-12-09 05:20:24.132843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.520 [2024-12-09 05:20:24.133032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.520 [2024-12-09 05:20:24.133211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.520 [2024-12-09 05:20:24.133219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.520 [2024-12-09 05:20:24.133226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.520 [2024-12-09 05:20:24.133233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.520 [2024-12-09 05:20:24.145602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.520 [2024-12-09 05:20:24.145972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.520 [2024-12-09 05:20:24.146031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.520 [2024-12-09 05:20:24.146056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.520 [2024-12-09 05:20:24.146576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.520 [2024-12-09 05:20:24.146755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.520 [2024-12-09 05:20:24.146764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.520 [2024-12-09 05:20:24.146772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.520 [2024-12-09 05:20:24.146778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:47.520 [2024-12-09 05:20:24.158791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.520 [2024-12-09 05:20:24.159250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.520 [2024-12-09 05:20:24.159267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.520 [2024-12-09 05:20:24.159274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.520 [2024-12-09 05:20:24.159453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.520 [2024-12-09 05:20:24.159632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.520 [2024-12-09 05:20:24.159640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.520 [2024-12-09 05:20:24.159647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.520 [2024-12-09 05:20:24.159654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.781 [2024-12-09 05:20:24.171919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.781 [2024-12-09 05:20:24.172307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.781 [2024-12-09 05:20:24.172324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.781 [2024-12-09 05:20:24.172331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.781 [2024-12-09 05:20:24.172504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.781 [2024-12-09 05:20:24.172676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.781 [2024-12-09 05:20:24.172685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.781 [2024-12-09 05:20:24.172691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.781 [2024-12-09 05:20:24.172697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:47.781 [2024-12-09 05:20:24.184845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.781 [2024-12-09 05:20:24.185221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.781 [2024-12-09 05:20:24.185239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.781 [2024-12-09 05:20:24.185246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.781 [2024-12-09 05:20:24.185418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.781 [2024-12-09 05:20:24.185592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.781 [2024-12-09 05:20:24.185600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.781 [2024-12-09 05:20:24.185610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.781 [2024-12-09 05:20:24.185617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.781 [2024-12-09 05:20:24.197893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.781 [2024-12-09 05:20:24.198280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.781 [2024-12-09 05:20:24.198308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.781 [2024-12-09 05:20:24.198315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.781 [2024-12-09 05:20:24.198488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.781 [2024-12-09 05:20:24.198660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.781 [2024-12-09 05:20:24.198668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.781 [2024-12-09 05:20:24.198674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.781 [2024-12-09 05:20:24.198680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:47.781 [2024-12-09 05:20:24.210906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.781 [2024-12-09 05:20:24.211274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.781 [2024-12-09 05:20:24.211321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.781 [2024-12-09 05:20:24.211345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.781 [2024-12-09 05:20:24.211825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.781 [2024-12-09 05:20:24.212008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.781 [2024-12-09 05:20:24.212017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.781 [2024-12-09 05:20:24.212023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.781 [2024-12-09 05:20:24.212030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.781 [2024-12-09 05:20:24.223949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.781 [2024-12-09 05:20:24.224337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.781 [2024-12-09 05:20:24.224354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.781 [2024-12-09 05:20:24.224361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.781 [2024-12-09 05:20:24.224534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.781 [2024-12-09 05:20:24.224706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.781 [2024-12-09 05:20:24.224715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.781 [2024-12-09 05:20:24.224721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.781 [2024-12-09 05:20:24.224727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:47.781 [2024-12-09 05:20:24.236972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.781 [2024-12-09 05:20:24.237308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.781 [2024-12-09 05:20:24.237326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.781 [2024-12-09 05:20:24.237333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.781 [2024-12-09 05:20:24.237507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.781 [2024-12-09 05:20:24.237678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.781 [2024-12-09 05:20:24.237686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.781 [2024-12-09 05:20:24.237693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.781 [2024-12-09 05:20:24.237699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.781 [2024-12-09 05:20:24.249893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.781 [2024-12-09 05:20:24.250348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.781 [2024-12-09 05:20:24.250365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.782 [2024-12-09 05:20:24.250372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.782 [2024-12-09 05:20:24.250545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.782 [2024-12-09 05:20:24.250718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.782 [2024-12-09 05:20:24.250726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.782 [2024-12-09 05:20:24.250733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.782 [2024-12-09 05:20:24.250739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:47.782 [2024-12-09 05:20:24.262843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.782 [2024-12-09 05:20:24.263225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.782 [2024-12-09 05:20:24.263243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.782 [2024-12-09 05:20:24.263250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.782 [2024-12-09 05:20:24.263423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.782 [2024-12-09 05:20:24.263595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.782 [2024-12-09 05:20:24.263603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.782 [2024-12-09 05:20:24.263610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.782 [2024-12-09 05:20:24.263616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.782 [2024-12-09 05:20:24.275866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.782 [2024-12-09 05:20:24.276293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.782 [2024-12-09 05:20:24.276347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.782 [2024-12-09 05:20:24.276370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.782 [2024-12-09 05:20:24.276954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.782 [2024-12-09 05:20:24.277373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.782 [2024-12-09 05:20:24.277382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.782 [2024-12-09 05:20:24.277388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.782 [2024-12-09 05:20:24.277394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:47.782 [2024-12-09 05:20:24.288823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.782 [2024-12-09 05:20:24.289133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.782 [2024-12-09 05:20:24.289150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.782 [2024-12-09 05:20:24.289158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.782 [2024-12-09 05:20:24.289342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.782 [2024-12-09 05:20:24.289515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.782 [2024-12-09 05:20:24.289523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.782 [2024-12-09 05:20:24.289530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.782 [2024-12-09 05:20:24.289536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.782 [2024-12-09 05:20:24.301741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.782 [2024-12-09 05:20:24.302085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.782 [2024-12-09 05:20:24.302101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.782 [2024-12-09 05:20:24.302109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.782 [2024-12-09 05:20:24.302286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.782 [2024-12-09 05:20:24.302449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.782 [2024-12-09 05:20:24.302457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.782 [2024-12-09 05:20:24.302463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.782 [2024-12-09 05:20:24.302469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:47.782 [2024-12-09 05:20:24.314724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.782 [2024-12-09 05:20:24.315058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.782 [2024-12-09 05:20:24.315074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.782 [2024-12-09 05:20:24.315082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.782 [2024-12-09 05:20:24.315257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.782 [2024-12-09 05:20:24.315434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.782 [2024-12-09 05:20:24.315442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.782 [2024-12-09 05:20:24.315449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.782 [2024-12-09 05:20:24.315455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.782 8978.00 IOPS, 35.07 MiB/s [2024-12-09T04:20:24.428Z] [2024-12-09 05:20:24.327648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.782 [2024-12-09 05:20:24.327954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.782 [2024-12-09 05:20:24.327970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.782 [2024-12-09 05:20:24.327978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.782 [2024-12-09 05:20:24.328156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.782 [2024-12-09 05:20:24.328329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.782 [2024-12-09 05:20:24.328337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.782 [2024-12-09 05:20:24.328344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.782 [2024-12-09 05:20:24.328350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:47.782 [2024-12-09 05:20:24.340634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.782 [2024-12-09 05:20:24.340947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.782 [2024-12-09 05:20:24.340963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.782 [2024-12-09 05:20:24.340970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.782 [2024-12-09 05:20:24.341148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.782 [2024-12-09 05:20:24.341322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.782 [2024-12-09 05:20:24.341330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.782 [2024-12-09 05:20:24.341337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.782 [2024-12-09 05:20:24.341343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.782 [2024-12-09 05:20:24.353639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.782 [2024-12-09 05:20:24.354051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.782 [2024-12-09 05:20:24.354095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.782 [2024-12-09 05:20:24.354118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.782 [2024-12-09 05:20:24.354557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.782 [2024-12-09 05:20:24.354731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.782 [2024-12-09 05:20:24.354743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.782 [2024-12-09 05:20:24.354749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.782 [2024-12-09 05:20:24.354756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:47.782 [2024-12-09 05:20:24.366630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.782 [2024-12-09 05:20:24.367024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.782 [2024-12-09 05:20:24.367042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.782 [2024-12-09 05:20:24.367050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.782 [2024-12-09 05:20:24.367238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.782 [2024-12-09 05:20:24.367412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.782 [2024-12-09 05:20:24.367421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.782 [2024-12-09 05:20:24.367427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.782 [2024-12-09 05:20:24.367434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.782 [2024-12-09 05:20:24.379713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.782 [2024-12-09 05:20:24.380099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.783 [2024-12-09 05:20:24.380117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.783 [2024-12-09 05:20:24.380124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.783 [2024-12-09 05:20:24.380302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.783 [2024-12-09 05:20:24.380481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.783 [2024-12-09 05:20:24.380490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.783 [2024-12-09 05:20:24.380497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.783 [2024-12-09 05:20:24.380503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:47.783 [2024-12-09 05:20:24.392750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.783 [2024-12-09 05:20:24.393077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.783 [2024-12-09 05:20:24.393095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.783 [2024-12-09 05:20:24.393102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.783 [2024-12-09 05:20:24.393288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.783 [2024-12-09 05:20:24.393461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.783 [2024-12-09 05:20:24.393469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.783 [2024-12-09 05:20:24.393476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.783 [2024-12-09 05:20:24.393482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.783 [2024-12-09 05:20:24.405769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.783 [2024-12-09 05:20:24.406100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.783 [2024-12-09 05:20:24.406118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.783 [2024-12-09 05:20:24.406126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.783 [2024-12-09 05:20:24.406310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.783 [2024-12-09 05:20:24.406483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.783 [2024-12-09 05:20:24.406491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.783 [2024-12-09 05:20:24.406497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.783 [2024-12-09 05:20:24.406503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:47.783 [2024-12-09 05:20:24.418757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.783 [2024-12-09 05:20:24.419204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.783 [2024-12-09 05:20:24.419222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:47.783 [2024-12-09 05:20:24.419229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:47.783 [2024-12-09 05:20:24.419420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:47.783 [2024-12-09 05:20:24.419598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.783 [2024-12-09 05:20:24.419607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.783 [2024-12-09 05:20:24.419613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.783 [2024-12-09 05:20:24.419620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.065 [2024-12-09 05:20:24.431884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.065 [2024-12-09 05:20:24.432195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.065 [2024-12-09 05:20:24.432212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.065 [2024-12-09 05:20:24.432219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.065 [2024-12-09 05:20:24.432392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.065 [2024-12-09 05:20:24.432565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.065 [2024-12-09 05:20:24.432573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.065 [2024-12-09 05:20:24.432580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.065 [2024-12-09 05:20:24.432586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.065 [2024-12-09 05:20:24.444751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.065 [2024-12-09 05:20:24.445142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.065 [2024-12-09 05:20:24.445195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.065 [2024-12-09 05:20:24.445219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.065 [2024-12-09 05:20:24.445802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.065 [2024-12-09 05:20:24.446347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.065 [2024-12-09 05:20:24.446356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.065 [2024-12-09 05:20:24.446362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.065 [2024-12-09 05:20:24.446368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.065 [2024-12-09 05:20:24.457803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.065 [2024-12-09 05:20:24.458185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.065 [2024-12-09 05:20:24.458201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.065 [2024-12-09 05:20:24.458209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.065 [2024-12-09 05:20:24.458382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.065 [2024-12-09 05:20:24.458554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.065 [2024-12-09 05:20:24.458562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.065 [2024-12-09 05:20:24.458568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.065 [2024-12-09 05:20:24.458574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.065 [2024-12-09 05:20:24.470781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.065 [2024-12-09 05:20:24.471159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.065 [2024-12-09 05:20:24.471176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.065 [2024-12-09 05:20:24.471184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.065 [2024-12-09 05:20:24.471356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.065 [2024-12-09 05:20:24.471530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.065 [2024-12-09 05:20:24.471538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.065 [2024-12-09 05:20:24.471544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.065 [2024-12-09 05:20:24.471551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.065 [2024-12-09 05:20:24.483704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.065 [2024-12-09 05:20:24.484128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.065 [2024-12-09 05:20:24.484173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.065 [2024-12-09 05:20:24.484196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.065 [2024-12-09 05:20:24.484786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.065 [2024-12-09 05:20:24.485400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.065 [2024-12-09 05:20:24.485409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.065 [2024-12-09 05:20:24.485415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.065 [2024-12-09 05:20:24.485422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.065 [2024-12-09 05:20:24.496650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.065 [2024-12-09 05:20:24.496965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.065 [2024-12-09 05:20:24.496981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.065 [2024-12-09 05:20:24.496989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.065 [2024-12-09 05:20:24.497166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.065 [2024-12-09 05:20:24.497339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.065 [2024-12-09 05:20:24.497347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.065 [2024-12-09 05:20:24.497354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.065 [2024-12-09 05:20:24.497360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.065 [2024-12-09 05:20:24.509583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.065 [2024-12-09 05:20:24.510049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.065 [2024-12-09 05:20:24.510066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.065 [2024-12-09 05:20:24.510073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.065 [2024-12-09 05:20:24.510446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.065 [2024-12-09 05:20:24.510623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.065 [2024-12-09 05:20:24.510632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.065 [2024-12-09 05:20:24.510639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.065 [2024-12-09 05:20:24.510645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.066 [2024-12-09 05:20:24.522571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.066 [2024-12-09 05:20:24.522957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.066 [2024-12-09 05:20:24.523020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.066 [2024-12-09 05:20:24.523046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.066 [2024-12-09 05:20:24.523468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.066 [2024-12-09 05:20:24.523642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.066 [2024-12-09 05:20:24.523655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.066 [2024-12-09 05:20:24.523662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.066 [2024-12-09 05:20:24.523668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.066 [2024-12-09 05:20:24.535668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.066 [2024-12-09 05:20:24.536052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.066 [2024-12-09 05:20:24.536070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.066 [2024-12-09 05:20:24.536077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.066 [2024-12-09 05:20:24.536257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.066 [2024-12-09 05:20:24.536421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.066 [2024-12-09 05:20:24.536429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.066 [2024-12-09 05:20:24.536435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.066 [2024-12-09 05:20:24.536441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.066 [2024-12-09 05:20:24.548642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.066 [2024-12-09 05:20:24.549078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.066 [2024-12-09 05:20:24.549112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.066 [2024-12-09 05:20:24.549137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.066 [2024-12-09 05:20:24.549694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.066 [2024-12-09 05:20:24.549869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.066 [2024-12-09 05:20:24.549877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.066 [2024-12-09 05:20:24.549883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.066 [2024-12-09 05:20:24.549890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.066 [2024-12-09 05:20:24.561446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.066 [2024-12-09 05:20:24.561906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.066 [2024-12-09 05:20:24.561923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.066 [2024-12-09 05:20:24.561930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.066 [2024-12-09 05:20:24.562109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.066 [2024-12-09 05:20:24.562282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.066 [2024-12-09 05:20:24.562290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.066 [2024-12-09 05:20:24.562296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.066 [2024-12-09 05:20:24.562303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.066 [2024-12-09 05:20:24.574296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.066 [2024-12-09 05:20:24.574659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.066 [2024-12-09 05:20:24.574675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.066 [2024-12-09 05:20:24.574682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.066 [2024-12-09 05:20:24.574846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.066 [2024-12-09 05:20:24.575015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.066 [2024-12-09 05:20:24.575039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.066 [2024-12-09 05:20:24.575046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.066 [2024-12-09 05:20:24.575053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.066 [2024-12-09 05:20:24.587156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.066 [2024-12-09 05:20:24.587613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.066 [2024-12-09 05:20:24.587629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.066 [2024-12-09 05:20:24.587636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.066 [2024-12-09 05:20:24.587799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.066 [2024-12-09 05:20:24.587962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.066 [2024-12-09 05:20:24.587970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.066 [2024-12-09 05:20:24.587976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.066 [2024-12-09 05:20:24.587981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.066 [2024-12-09 05:20:24.600050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.066 [2024-12-09 05:20:24.600496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.066 [2024-12-09 05:20:24.600544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.066 [2024-12-09 05:20:24.600567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.066 [2024-12-09 05:20:24.601166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.066 [2024-12-09 05:20:24.601649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.066 [2024-12-09 05:20:24.601657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.066 [2024-12-09 05:20:24.601663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.066 [2024-12-09 05:20:24.601670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.066 [2024-12-09 05:20:24.612987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.066 [2024-12-09 05:20:24.613431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.066 [2024-12-09 05:20:24.613451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.066 [2024-12-09 05:20:24.613458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.066 [2024-12-09 05:20:24.613621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.066 [2024-12-09 05:20:24.613783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.066 [2024-12-09 05:20:24.613791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.066 [2024-12-09 05:20:24.613798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.066 [2024-12-09 05:20:24.613804] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.066 [2024-12-09 05:20:24.625843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.066 [2024-12-09 05:20:24.626280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.066 [2024-12-09 05:20:24.626298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.066 [2024-12-09 05:20:24.626305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.066 [2024-12-09 05:20:24.626478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.066 [2024-12-09 05:20:24.626651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.066 [2024-12-09 05:20:24.626659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.066 [2024-12-09 05:20:24.626666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.066 [2024-12-09 05:20:24.626672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.066 [2024-12-09 05:20:24.639069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.066 [2024-12-09 05:20:24.639528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.066 [2024-12-09 05:20:24.639545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.066 [2024-12-09 05:20:24.639553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.066 [2024-12-09 05:20:24.639730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.066 [2024-12-09 05:20:24.639909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.066 [2024-12-09 05:20:24.639917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.066 [2024-12-09 05:20:24.639925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.067 [2024-12-09 05:20:24.639932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.067 [2024-12-09 05:20:24.652039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.067 [2024-12-09 05:20:24.652495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.067 [2024-12-09 05:20:24.652541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.067 [2024-12-09 05:20:24.652563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.067 [2024-12-09 05:20:24.653174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.067 [2024-12-09 05:20:24.653733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.067 [2024-12-09 05:20:24.653742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.067 [2024-12-09 05:20:24.653747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.067 [2024-12-09 05:20:24.653754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.067 [2024-12-09 05:20:24.664840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.067 [2024-12-09 05:20:24.665307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.067 [2024-12-09 05:20:24.665324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.067 [2024-12-09 05:20:24.665331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.067 [2024-12-09 05:20:24.665503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.067 [2024-12-09 05:20:24.665676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.067 [2024-12-09 05:20:24.665684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.067 [2024-12-09 05:20:24.665691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.067 [2024-12-09 05:20:24.665697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.067 [2024-12-09 05:20:24.677732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.067 [2024-12-09 05:20:24.678171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.067 [2024-12-09 05:20:24.678187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.067 [2024-12-09 05:20:24.678194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.067 [2024-12-09 05:20:24.678357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.067 [2024-12-09 05:20:24.678520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.067 [2024-12-09 05:20:24.678528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.067 [2024-12-09 05:20:24.678534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.067 [2024-12-09 05:20:24.678540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.067 [2024-12-09 05:20:24.690717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.067 [2024-12-09 05:20:24.691166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.067 [2024-12-09 05:20:24.691183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.067 [2024-12-09 05:20:24.691190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.067 [2024-12-09 05:20:24.691715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.067 [2024-12-09 05:20:24.692007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.067 [2024-12-09 05:20:24.692018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.067 [2024-12-09 05:20:24.692040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.067 [2024-12-09 05:20:24.692047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.067 [2024-12-09 05:20:24.703838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.067 [2024-12-09 05:20:24.704298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.067 [2024-12-09 05:20:24.704316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.067 [2024-12-09 05:20:24.704323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.067 [2024-12-09 05:20:24.704501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.067 [2024-12-09 05:20:24.704678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.067 [2024-12-09 05:20:24.704687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.067 [2024-12-09 05:20:24.704693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.067 [2024-12-09 05:20:24.704700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.327 [2024-12-09 05:20:24.716913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.327 [2024-12-09 05:20:24.717379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-12-09 05:20:24.717396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.327 [2024-12-09 05:20:24.717403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.327 [2024-12-09 05:20:24.717576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.327 [2024-12-09 05:20:24.717748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.327 [2024-12-09 05:20:24.717756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.327 [2024-12-09 05:20:24.717763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.327 [2024-12-09 05:20:24.717769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.327 [2024-12-09 05:20:24.729740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.327 [2024-12-09 05:20:24.730179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-12-09 05:20:24.730195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.328 [2024-12-09 05:20:24.730202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.328 [2024-12-09 05:20:24.730365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.328 [2024-12-09 05:20:24.730528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.328 [2024-12-09 05:20:24.730535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.328 [2024-12-09 05:20:24.730542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.328 [2024-12-09 05:20:24.730548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.328 [2024-12-09 05:20:24.742619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.328 [2024-12-09 05:20:24.743072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-12-09 05:20:24.743117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.328 [2024-12-09 05:20:24.743141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.328 [2024-12-09 05:20:24.743724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.328 [2024-12-09 05:20:24.743977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.328 [2024-12-09 05:20:24.743985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.328 [2024-12-09 05:20:24.743991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.328 [2024-12-09 05:20:24.744001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.328 [2024-12-09 05:20:24.755423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.328 [2024-12-09 05:20:24.755887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-12-09 05:20:24.755904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.328 [2024-12-09 05:20:24.755911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.328 [2024-12-09 05:20:24.756090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.328 [2024-12-09 05:20:24.756263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.328 [2024-12-09 05:20:24.756271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.328 [2024-12-09 05:20:24.756277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.328 [2024-12-09 05:20:24.756284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.328 [2024-12-09 05:20:24.768303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.328 [2024-12-09 05:20:24.768787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-12-09 05:20:24.768831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.328 [2024-12-09 05:20:24.768855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.328 [2024-12-09 05:20:24.769450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.328 [2024-12-09 05:20:24.770009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.328 [2024-12-09 05:20:24.770017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.328 [2024-12-09 05:20:24.770024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.328 [2024-12-09 05:20:24.770046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.328 [2024-12-09 05:20:24.781251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.328 [2024-12-09 05:20:24.781691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-12-09 05:20:24.781749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.328 [2024-12-09 05:20:24.781773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.328 [2024-12-09 05:20:24.782373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.328 [2024-12-09 05:20:24.782961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.328 [2024-12-09 05:20:24.782986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.328 [2024-12-09 05:20:24.782993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.328 [2024-12-09 05:20:24.783004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.328 [2024-12-09 05:20:24.794137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.328 [2024-12-09 05:20:24.794591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-12-09 05:20:24.794607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.328 [2024-12-09 05:20:24.794614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.328 [2024-12-09 05:20:24.794778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.328 [2024-12-09 05:20:24.794942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.328 [2024-12-09 05:20:24.794949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.328 [2024-12-09 05:20:24.794956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.328 [2024-12-09 05:20:24.794961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.328 [2024-12-09 05:20:24.806987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.328 [2024-12-09 05:20:24.807347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-12-09 05:20:24.807363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.328 [2024-12-09 05:20:24.807370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.328 [2024-12-09 05:20:24.807534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.328 [2024-12-09 05:20:24.807697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.328 [2024-12-09 05:20:24.807705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.328 [2024-12-09 05:20:24.807711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.328 [2024-12-09 05:20:24.807717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.328 [2024-12-09 05:20:24.819911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.328 [2024-12-09 05:20:24.820307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-12-09 05:20:24.820324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.328 [2024-12-09 05:20:24.820332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.328 [2024-12-09 05:20:24.820508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.328 [2024-12-09 05:20:24.820681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.328 [2024-12-09 05:20:24.820689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.328 [2024-12-09 05:20:24.820695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.328 [2024-12-09 05:20:24.820701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.328 [2024-12-09 05:20:24.832823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.328 [2024-12-09 05:20:24.833185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-12-09 05:20:24.833201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.328 [2024-12-09 05:20:24.833209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.328 [2024-12-09 05:20:24.833381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.328 [2024-12-09 05:20:24.833553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.328 [2024-12-09 05:20:24.833562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.328 [2024-12-09 05:20:24.833568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.328 [2024-12-09 05:20:24.833574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.328 [2024-12-09 05:20:24.845758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.328 [2024-12-09 05:20:24.846232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-12-09 05:20:24.846277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.328 [2024-12-09 05:20:24.846300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.328 [2024-12-09 05:20:24.846883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.328 [2024-12-09 05:20:24.847126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.328 [2024-12-09 05:20:24.847135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.328 [2024-12-09 05:20:24.847141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.328 [2024-12-09 05:20:24.847147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.329 [2024-12-09 05:20:24.858692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.329 [2024-12-09 05:20:24.859167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-12-09 05:20:24.859182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.329 [2024-12-09 05:20:24.859189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.329 [2024-12-09 05:20:24.859352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.329 [2024-12-09 05:20:24.859516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.329 [2024-12-09 05:20:24.859526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.329 [2024-12-09 05:20:24.859533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.329 [2024-12-09 05:20:24.859539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.329 [2024-12-09 05:20:24.871604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.329 [2024-12-09 05:20:24.872043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-12-09 05:20:24.872059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.329 [2024-12-09 05:20:24.872067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.329 [2024-12-09 05:20:24.872253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.329 [2024-12-09 05:20:24.872426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.329 [2024-12-09 05:20:24.872434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.329 [2024-12-09 05:20:24.872441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.329 [2024-12-09 05:20:24.872447] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.329 [2024-12-09 05:20:24.884445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.329 [2024-12-09 05:20:24.884906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-12-09 05:20:24.884923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.329 [2024-12-09 05:20:24.884931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.329 [2024-12-09 05:20:24.885128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.329 [2024-12-09 05:20:24.885307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.329 [2024-12-09 05:20:24.885315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.329 [2024-12-09 05:20:24.885322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.329 [2024-12-09 05:20:24.885329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.329 [2024-12-09 05:20:24.897554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.329 [2024-12-09 05:20:24.898012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-12-09 05:20:24.898031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.329 [2024-12-09 05:20:24.898040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.329 [2024-12-09 05:20:24.898218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.329 [2024-12-09 05:20:24.898397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.329 [2024-12-09 05:20:24.898405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.329 [2024-12-09 05:20:24.898412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.329 [2024-12-09 05:20:24.898418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.329 [2024-12-09 05:20:24.910401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.329 [2024-12-09 05:20:24.910875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-12-09 05:20:24.910919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.329 [2024-12-09 05:20:24.910942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.329 [2024-12-09 05:20:24.911416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.329 [2024-12-09 05:20:24.911591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.329 [2024-12-09 05:20:24.911600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.329 [2024-12-09 05:20:24.911606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.329 [2024-12-09 05:20:24.911613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.329 [2024-12-09 05:20:24.923229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.329 [2024-12-09 05:20:24.923680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-12-09 05:20:24.923723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.329 [2024-12-09 05:20:24.923746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.329 [2024-12-09 05:20:24.924343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.329 [2024-12-09 05:20:24.924869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.329 [2024-12-09 05:20:24.924877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.329 [2024-12-09 05:20:24.924883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.329 [2024-12-09 05:20:24.924890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.329 [2024-12-09 05:20:24.936229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.329 [2024-12-09 05:20:24.936677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-12-09 05:20:24.936693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.329 [2024-12-09 05:20:24.936701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.329 [2024-12-09 05:20:24.936873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.329 [2024-12-09 05:20:24.937050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.329 [2024-12-09 05:20:24.937058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.329 [2024-12-09 05:20:24.937064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.329 [2024-12-09 05:20:24.937071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.329 [2024-12-09 05:20:24.949057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.329 [2024-12-09 05:20:24.949500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-12-09 05:20:24.949556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.329 [2024-12-09 05:20:24.949580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.329 [2024-12-09 05:20:24.950225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.329 [2024-12-09 05:20:24.950427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.329 [2024-12-09 05:20:24.950435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.329 [2024-12-09 05:20:24.950442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.329 [2024-12-09 05:20:24.950448] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.329 [2024-12-09 05:20:24.961882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.329 [2024-12-09 05:20:24.962238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-12-09 05:20:24.962256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.329 [2024-12-09 05:20:24.962263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.329 [2024-12-09 05:20:24.962435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.329 [2024-12-09 05:20:24.962608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.329 [2024-12-09 05:20:24.962616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.329 [2024-12-09 05:20:24.962622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.329 [2024-12-09 05:20:24.962628] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.599 [2024-12-09 05:20:24.974893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.599 [2024-12-09 05:20:24.975367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.599 [2024-12-09 05:20:24.975415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.599 [2024-12-09 05:20:24.975439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.599 [2024-12-09 05:20:24.975967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.599 [2024-12-09 05:20:24.976151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.599 [2024-12-09 05:20:24.976160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.599 [2024-12-09 05:20:24.976166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.599 [2024-12-09 05:20:24.976173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.599 [2024-12-09 05:20:24.987817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.599 [2024-12-09 05:20:24.988236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.599 [2024-12-09 05:20:24.988253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.599 [2024-12-09 05:20:24.988260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.599 [2024-12-09 05:20:24.988435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.599 [2024-12-09 05:20:24.988608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.599 [2024-12-09 05:20:24.988617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.599 [2024-12-09 05:20:24.988623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.599 [2024-12-09 05:20:24.988629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.599 [2024-12-09 05:20:25.000745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.599 [2024-12-09 05:20:25.001233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.599 [2024-12-09 05:20:25.001279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.599 [2024-12-09 05:20:25.001302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.599 [2024-12-09 05:20:25.001714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.599 [2024-12-09 05:20:25.001886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.599 [2024-12-09 05:20:25.001895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.599 [2024-12-09 05:20:25.001901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.599 [2024-12-09 05:20:25.001907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.599 [2024-12-09 05:20:25.013622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.599 [2024-12-09 05:20:25.014072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.599 [2024-12-09 05:20:25.014119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.599 [2024-12-09 05:20:25.014143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.599 [2024-12-09 05:20:25.014580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.599 [2024-12-09 05:20:25.014744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.599 [2024-12-09 05:20:25.014752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.599 [2024-12-09 05:20:25.014758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.599 [2024-12-09 05:20:25.014764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.599 [2024-12-09 05:20:25.026554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.599 [2024-12-09 05:20:25.026884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.599 [2024-12-09 05:20:25.026900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.599 [2024-12-09 05:20:25.026907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.599 [2024-12-09 05:20:25.027094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.599 [2024-12-09 05:20:25.027267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.599 [2024-12-09 05:20:25.027276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.599 [2024-12-09 05:20:25.027288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.599 [2024-12-09 05:20:25.027295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.599 [2024-12-09 05:20:25.039492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.599 [2024-12-09 05:20:25.039952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.599 [2024-12-09 05:20:25.039968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.599 [2024-12-09 05:20:25.039975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.599 [2024-12-09 05:20:25.040154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.599 [2024-12-09 05:20:25.040327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.600 [2024-12-09 05:20:25.040335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.600 [2024-12-09 05:20:25.040342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.600 [2024-12-09 05:20:25.040348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.600 [2024-12-09 05:20:25.052449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.600 [2024-12-09 05:20:25.052821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.600 [2024-12-09 05:20:25.052838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.600 [2024-12-09 05:20:25.052845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.600 [2024-12-09 05:20:25.053023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.600 [2024-12-09 05:20:25.053196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.600 [2024-12-09 05:20:25.053204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.600 [2024-12-09 05:20:25.053211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.600 [2024-12-09 05:20:25.053217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.600 [2024-12-09 05:20:25.065433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.600 [2024-12-09 05:20:25.065902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.600 [2024-12-09 05:20:25.065947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.600 [2024-12-09 05:20:25.065969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.600 [2024-12-09 05:20:25.066494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.600 [2024-12-09 05:20:25.066668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.600 [2024-12-09 05:20:25.066676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.600 [2024-12-09 05:20:25.066683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.600 [2024-12-09 05:20:25.066689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.600 [2024-12-09 05:20:25.078543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.600 [2024-12-09 05:20:25.078923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.600 [2024-12-09 05:20:25.078940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.600 [2024-12-09 05:20:25.078947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.600 [2024-12-09 05:20:25.079124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.600 [2024-12-09 05:20:25.079298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.600 [2024-12-09 05:20:25.079306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.600 [2024-12-09 05:20:25.079313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.600 [2024-12-09 05:20:25.079319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.600 [2024-12-09 05:20:25.091505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.600 [2024-12-09 05:20:25.091946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.600 [2024-12-09 05:20:25.091963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.600 [2024-12-09 05:20:25.091971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.600 [2024-12-09 05:20:25.092156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.600 [2024-12-09 05:20:25.092341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.600 [2024-12-09 05:20:25.092350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.600 [2024-12-09 05:20:25.092356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.600 [2024-12-09 05:20:25.092363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.600 [2024-12-09 05:20:25.104590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.600 [2024-12-09 05:20:25.104955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.600 [2024-12-09 05:20:25.104971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.600 [2024-12-09 05:20:25.104978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.600 [2024-12-09 05:20:25.105175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.600 [2024-12-09 05:20:25.105353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.600 [2024-12-09 05:20:25.105361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.600 [2024-12-09 05:20:25.105368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.600 [2024-12-09 05:20:25.105375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.600 [2024-12-09 05:20:25.117515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.600 [2024-12-09 05:20:25.117949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.600 [2024-12-09 05:20:25.117969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.600 [2024-12-09 05:20:25.117977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.600 [2024-12-09 05:20:25.118169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.600 [2024-12-09 05:20:25.118342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.600 [2024-12-09 05:20:25.118350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.600 [2024-12-09 05:20:25.118357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.601 [2024-12-09 05:20:25.118363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.601 [2024-12-09 05:20:25.130449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.601 [2024-12-09 05:20:25.130880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.601 [2024-12-09 05:20:25.130924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.601 [2024-12-09 05:20:25.130947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.601 [2024-12-09 05:20:25.131548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.601 [2024-12-09 05:20:25.132048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.601 [2024-12-09 05:20:25.132057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.601 [2024-12-09 05:20:25.132063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.601 [2024-12-09 05:20:25.132069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.601 [2024-12-09 05:20:25.143370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.601 [2024-12-09 05:20:25.143803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.601 [2024-12-09 05:20:25.143820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.601 [2024-12-09 05:20:25.143828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.601 [2024-12-09 05:20:25.144005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.601 [2024-12-09 05:20:25.144199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.601 [2024-12-09 05:20:25.144207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.601 [2024-12-09 05:20:25.144214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.601 [2024-12-09 05:20:25.144221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.601 [2024-12-09 05:20:25.156567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.601 [2024-12-09 05:20:25.157010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.601 [2024-12-09 05:20:25.157027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.601 [2024-12-09 05:20:25.157035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.601 [2024-12-09 05:20:25.157219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.601 [2024-12-09 05:20:25.157396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.601 [2024-12-09 05:20:25.157406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.601 [2024-12-09 05:20:25.157414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.601 [2024-12-09 05:20:25.157422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.601 [2024-12-09 05:20:25.169585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.601 [2024-12-09 05:20:25.170039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.601 [2024-12-09 05:20:25.170094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.601 [2024-12-09 05:20:25.170117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.601 [2024-12-09 05:20:25.170702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.601 [2024-12-09 05:20:25.171298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.601 [2024-12-09 05:20:25.171326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.601 [2024-12-09 05:20:25.171357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.601 [2024-12-09 05:20:25.171364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.601 [2024-12-09 05:20:25.182598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.601 [2024-12-09 05:20:25.183020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.601 [2024-12-09 05:20:25.183037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.601 [2024-12-09 05:20:25.183045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.601 [2024-12-09 05:20:25.183217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.601 [2024-12-09 05:20:25.183389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.601 [2024-12-09 05:20:25.183397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.601 [2024-12-09 05:20:25.183403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.601 [2024-12-09 05:20:25.183410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.601 [2024-12-09 05:20:25.195511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.601 [2024-12-09 05:20:25.195845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.601 [2024-12-09 05:20:25.195861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.601 [2024-12-09 05:20:25.195868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.601 [2024-12-09 05:20:25.196055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.601 [2024-12-09 05:20:25.196244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.601 [2024-12-09 05:20:25.196252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.601 [2024-12-09 05:20:25.196262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.601 [2024-12-09 05:20:25.196269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.601 [2024-12-09 05:20:25.208523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.601 [2024-12-09 05:20:25.208965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.601 [2024-12-09 05:20:25.209016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.601 [2024-12-09 05:20:25.209043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.601 [2024-12-09 05:20:25.209627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.602 [2024-12-09 05:20:25.210222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.602 [2024-12-09 05:20:25.210260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.602 [2024-12-09 05:20:25.210267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.602 [2024-12-09 05:20:25.210274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.602 [2024-12-09 05:20:25.221365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.602 [2024-12-09 05:20:25.221802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.602 [2024-12-09 05:20:25.221820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.602 [2024-12-09 05:20:25.221827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.602 [2024-12-09 05:20:25.222006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.602 [2024-12-09 05:20:25.222180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.602 [2024-12-09 05:20:25.222188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.602 [2024-12-09 05:20:25.222194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.602 [2024-12-09 05:20:25.222201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.602 [2024-12-09 05:20:25.234370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.602 [2024-12-09 05:20:25.234772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.602 [2024-12-09 05:20:25.234789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.602 [2024-12-09 05:20:25.234796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.602 [2024-12-09 05:20:25.234969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.602 [2024-12-09 05:20:25.235167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.602 [2024-12-09 05:20:25.235176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.602 [2024-12-09 05:20:25.235183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.602 [2024-12-09 05:20:25.235189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.866 [2024-12-09 05:20:25.247336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.866 [2024-12-09 05:20:25.247689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-12-09 05:20:25.247706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.866 [2024-12-09 05:20:25.247713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.866 [2024-12-09 05:20:25.247891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.866 [2024-12-09 05:20:25.248075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.866 [2024-12-09 05:20:25.248084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.866 [2024-12-09 05:20:25.248091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.866 [2024-12-09 05:20:25.248097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.866 [2024-12-09 05:20:25.260304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.866 [2024-12-09 05:20:25.260748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-12-09 05:20:25.260764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.866 [2024-12-09 05:20:25.260772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.866 [2024-12-09 05:20:25.260945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.866 [2024-12-09 05:20:25.261124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.866 [2024-12-09 05:20:25.261133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.866 [2024-12-09 05:20:25.261139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.866 [2024-12-09 05:20:25.261145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.866 [2024-12-09 05:20:25.273159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.866 [2024-12-09 05:20:25.273588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-12-09 05:20:25.273604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.866 [2024-12-09 05:20:25.273611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.866 [2024-12-09 05:20:25.273784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.866 [2024-12-09 05:20:25.273957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.866 [2024-12-09 05:20:25.273965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.866 [2024-12-09 05:20:25.273971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.866 [2024-12-09 05:20:25.273978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.866 [2024-12-09 05:20:25.286000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.866 [2024-12-09 05:20:25.286437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-12-09 05:20:25.286456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.866 [2024-12-09 05:20:25.286464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.866 [2024-12-09 05:20:25.286636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.866 [2024-12-09 05:20:25.286808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.866 [2024-12-09 05:20:25.286816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.866 [2024-12-09 05:20:25.286823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.866 [2024-12-09 05:20:25.286829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.866 [2024-12-09 05:20:25.298872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.866 [2024-12-09 05:20:25.299329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-12-09 05:20:25.299375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.866 [2024-12-09 05:20:25.299398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.866 [2024-12-09 05:20:25.299980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.866 [2024-12-09 05:20:25.300237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.866 [2024-12-09 05:20:25.300245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.866 [2024-12-09 05:20:25.300251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.866 [2024-12-09 05:20:25.300258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.866 [2024-12-09 05:20:25.311896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.866 [2024-12-09 05:20:25.312337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-12-09 05:20:25.312354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.866 [2024-12-09 05:20:25.312361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.866 [2024-12-09 05:20:25.312533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.866 [2024-12-09 05:20:25.312706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.866 [2024-12-09 05:20:25.312714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.866 [2024-12-09 05:20:25.312720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.866 [2024-12-09 05:20:25.312726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.866 [2024-12-09 05:20:25.324751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.866 [2024-12-09 05:20:25.325190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-12-09 05:20:25.325208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.866 [2024-12-09 05:20:25.325215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.866 [2024-12-09 05:20:25.325391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.866 [2024-12-09 05:20:25.325565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.866 [2024-12-09 05:20:25.325574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.866 [2024-12-09 05:20:25.325580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.866 [2024-12-09 05:20:25.325586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.866 6733.50 IOPS, 26.30 MiB/s [2024-12-09T04:20:25.512Z] [2024-12-09 05:20:25.337678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.866 [2024-12-09 05:20:25.338064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-12-09 05:20:25.338081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.866 [2024-12-09 05:20:25.338088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.866 [2024-12-09 05:20:25.338266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.866 [2024-12-09 05:20:25.338429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.866 [2024-12-09 05:20:25.338437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.866 [2024-12-09 05:20:25.338443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.866 [2024-12-09 05:20:25.338449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.866 [2024-12-09 05:20:25.350519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.866 [2024-12-09 05:20:25.350954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-12-09 05:20:25.350971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.866 [2024-12-09 05:20:25.350978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.866 [2024-12-09 05:20:25.351157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.866 [2024-12-09 05:20:25.351330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.867 [2024-12-09 05:20:25.351338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.867 [2024-12-09 05:20:25.351344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.867 [2024-12-09 05:20:25.351351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
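[editor's note, not part of the captured log] The interleaved throughput sample above (6733.50 IOPS, 26.30 MiB/s) is consistent with roughly 4 KiB per I/O: 6733.50 x 4096 B ≈ 27.58 MB/s, and 27,580,416 B / 1,048,576 ≈ 26.30 MiB/s. The reconnect failures that surround it do not stop the perf reporter from printing its periodic sample.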
00:25:48.867 [2024-12-09 05:20:25.363355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.867 [2024-12-09 05:20:25.363790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-12-09 05:20:25.363806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.867 [2024-12-09 05:20:25.363814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.867 [2024-12-09 05:20:25.363986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.867 [2024-12-09 05:20:25.364164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.867 [2024-12-09 05:20:25.364175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.867 [2024-12-09 05:20:25.364182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.867 [2024-12-09 05:20:25.364188] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.867 [2024-12-09 05:20:25.376208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.867 [2024-12-09 05:20:25.376628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-12-09 05:20:25.376672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.867 [2024-12-09 05:20:25.376696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.867 [2024-12-09 05:20:25.377213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.867 [2024-12-09 05:20:25.377469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.867 [2024-12-09 05:20:25.377480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.867 [2024-12-09 05:20:25.377490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.867 [2024-12-09 05:20:25.377499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.867 [2024-12-09 05:20:25.389599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.867 [2024-12-09 05:20:25.390037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-12-09 05:20:25.390054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.867 [2024-12-09 05:20:25.390061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.867 [2024-12-09 05:20:25.390235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.867 [2024-12-09 05:20:25.390407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.867 [2024-12-09 05:20:25.390415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.867 [2024-12-09 05:20:25.390422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.867 [2024-12-09 05:20:25.390428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.867 [2024-12-09 05:20:25.402432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.867 [2024-12-09 05:20:25.402881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-12-09 05:20:25.402897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.867 [2024-12-09 05:20:25.402905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.867 [2024-12-09 05:20:25.403102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.867 [2024-12-09 05:20:25.403280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.867 [2024-12-09 05:20:25.403290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.867 [2024-12-09 05:20:25.403298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.867 [2024-12-09 05:20:25.403305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.867 [2024-12-09 05:20:25.415534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.867 [2024-12-09 05:20:25.415982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-12-09 05:20:25.416005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.867 [2024-12-09 05:20:25.416013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.867 [2024-12-09 05:20:25.416192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.867 [2024-12-09 05:20:25.416370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.867 [2024-12-09 05:20:25.416378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.867 [2024-12-09 05:20:25.416385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.867 [2024-12-09 05:20:25.416391] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.867 [2024-12-09 05:20:25.428525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.867 [2024-12-09 05:20:25.428946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-12-09 05:20:25.428962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.867 [2024-12-09 05:20:25.428969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.867 [2024-12-09 05:20:25.429169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.867 [2024-12-09 05:20:25.429348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.867 [2024-12-09 05:20:25.429356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.867 [2024-12-09 05:20:25.429363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.867 [2024-12-09 05:20:25.429369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.867 [2024-12-09 05:20:25.441368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.867 [2024-12-09 05:20:25.441815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-12-09 05:20:25.441860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.867 [2024-12-09 05:20:25.441882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.867 [2024-12-09 05:20:25.442471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.867 [2024-12-09 05:20:25.442651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.867 [2024-12-09 05:20:25.442659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.867 [2024-12-09 05:20:25.442666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.867 [2024-12-09 05:20:25.442673] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.867 [2024-12-09 05:20:25.454260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.867 [2024-12-09 05:20:25.454670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-12-09 05:20:25.454690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.867 [2024-12-09 05:20:25.454697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.867 [2024-12-09 05:20:25.454860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.867 [2024-12-09 05:20:25.455046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.867 [2024-12-09 05:20:25.455055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.867 [2024-12-09 05:20:25.455061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.867 [2024-12-09 05:20:25.455068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.867 [2024-12-09 05:20:25.467160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.867 [2024-12-09 05:20:25.467609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-12-09 05:20:25.467625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.867 [2024-12-09 05:20:25.467633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.867 [2024-12-09 05:20:25.467811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.867 [2024-12-09 05:20:25.467988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.867 [2024-12-09 05:20:25.468003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.867 [2024-12-09 05:20:25.468010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.867 [2024-12-09 05:20:25.468017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.867 [2024-12-09 05:20:25.480014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.867 [2024-12-09 05:20:25.480394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-12-09 05:20:25.480439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.867 [2024-12-09 05:20:25.480463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.867 [2024-12-09 05:20:25.480951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.867 [2024-12-09 05:20:25.481130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.868 [2024-12-09 05:20:25.481138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.868 [2024-12-09 05:20:25.481145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.868 [2024-12-09 05:20:25.481151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.868 [2024-12-09 05:20:25.492921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.868 [2024-12-09 05:20:25.493345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-12-09 05:20:25.493362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.868 [2024-12-09 05:20:25.493370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.868 [2024-12-09 05:20:25.493546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.868 [2024-12-09 05:20:25.493720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.868 [2024-12-09 05:20:25.493728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.868 [2024-12-09 05:20:25.493734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.868 [2024-12-09 05:20:25.493740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:48.868 [2024-12-09 05:20:25.505970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.868 [2024-12-09 05:20:25.506379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-12-09 05:20:25.506397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:48.868 [2024-12-09 05:20:25.506404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:48.868 [2024-12-09 05:20:25.506582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:48.868 [2024-12-09 05:20:25.506760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.868 [2024-12-09 05:20:25.506768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.868 [2024-12-09 05:20:25.506775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.868 [2024-12-09 05:20:25.506781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.128 [2024-12-09 05:20:25.519004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.128 [2024-12-09 05:20:25.519449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.128 [2024-12-09 05:20:25.519496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.128 [2024-12-09 05:20:25.519521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.128 [2024-12-09 05:20:25.520020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.128 [2024-12-09 05:20:25.520194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.128 [2024-12-09 05:20:25.520203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.128 [2024-12-09 05:20:25.520209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.128 [2024-12-09 05:20:25.520216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.128 [2024-12-09 05:20:25.531982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.128 [2024-12-09 05:20:25.532413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.128 [2024-12-09 05:20:25.532430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.128 [2024-12-09 05:20:25.532438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.128 [2024-12-09 05:20:25.532610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.128 [2024-12-09 05:20:25.532783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.128 [2024-12-09 05:20:25.532795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.128 [2024-12-09 05:20:25.532801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.128 [2024-12-09 05:20:25.532807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.128 [2024-12-09 05:20:25.545011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.128 [2024-12-09 05:20:25.545451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.128 [2024-12-09 05:20:25.545468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.128 [2024-12-09 05:20:25.545475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.128 [2024-12-09 05:20:25.545647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.128 [2024-12-09 05:20:25.545821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.128 [2024-12-09 05:20:25.545830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.128 [2024-12-09 05:20:25.545837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.128 [2024-12-09 05:20:25.545843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.128 [2024-12-09 05:20:25.557935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.128 [2024-12-09 05:20:25.558357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.128 [2024-12-09 05:20:25.558375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.128 [2024-12-09 05:20:25.558382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.128 [2024-12-09 05:20:25.558555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.128 [2024-12-09 05:20:25.558728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.128 [2024-12-09 05:20:25.558737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.128 [2024-12-09 05:20:25.558744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.128 [2024-12-09 05:20:25.558750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.128 [2024-12-09 05:20:25.570827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.128 [2024-12-09 05:20:25.571145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.128 [2024-12-09 05:20:25.571163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.128 [2024-12-09 05:20:25.571170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.128 [2024-12-09 05:20:25.571343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.128 [2024-12-09 05:20:25.571516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.128 [2024-12-09 05:20:25.571524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.128 [2024-12-09 05:20:25.571530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.128 [2024-12-09 05:20:25.571537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.129 [2024-12-09 05:20:25.583779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.129 [2024-12-09 05:20:25.584142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.129 [2024-12-09 05:20:25.584161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.129 [2024-12-09 05:20:25.584168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.129 [2024-12-09 05:20:25.584341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.129 [2024-12-09 05:20:25.584515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.129 [2024-12-09 05:20:25.584525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.129 [2024-12-09 05:20:25.584531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.129 [2024-12-09 05:20:25.584537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.129 [2024-12-09 05:20:25.596871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.129 [2024-12-09 05:20:25.597301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.129 [2024-12-09 05:20:25.597318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.129 [2024-12-09 05:20:25.597326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.129 [2024-12-09 05:20:25.597503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.129 [2024-12-09 05:20:25.597682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.129 [2024-12-09 05:20:25.597690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.129 [2024-12-09 05:20:25.597697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.129 [2024-12-09 05:20:25.597704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.129 [2024-12-09 05:20:25.609859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.129 [2024-12-09 05:20:25.610239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.129 [2024-12-09 05:20:25.610256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.129 [2024-12-09 05:20:25.610263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.129 [2024-12-09 05:20:25.610436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.129 [2024-12-09 05:20:25.610608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.129 [2024-12-09 05:20:25.610616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.129 [2024-12-09 05:20:25.610623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.129 [2024-12-09 05:20:25.610629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.129 [2024-12-09 05:20:25.622891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.129 [2024-12-09 05:20:25.623274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.129 [2024-12-09 05:20:25.623294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.129 [2024-12-09 05:20:25.623302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.129 [2024-12-09 05:20:25.623475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.129 [2024-12-09 05:20:25.623653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.129 [2024-12-09 05:20:25.623661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.129 [2024-12-09 05:20:25.623667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.129 [2024-12-09 05:20:25.623674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.129 [2024-12-09 05:20:25.635907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.129 [2024-12-09 05:20:25.636305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.129 [2024-12-09 05:20:25.636349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.129 [2024-12-09 05:20:25.636373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.129 [2024-12-09 05:20:25.636887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.129 [2024-12-09 05:20:25.637068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.129 [2024-12-09 05:20:25.637077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.129 [2024-12-09 05:20:25.637083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.129 [2024-12-09 05:20:25.637090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.129 [2024-12-09 05:20:25.648856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.129 [2024-12-09 05:20:25.649202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.129 [2024-12-09 05:20:25.649248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.129 [2024-12-09 05:20:25.649270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.129 [2024-12-09 05:20:25.649853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.129 [2024-12-09 05:20:25.650143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.129 [2024-12-09 05:20:25.650151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.129 [2024-12-09 05:20:25.650158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.129 [2024-12-09 05:20:25.650164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.129 [2024-12-09 05:20:25.661837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.129 [2024-12-09 05:20:25.662176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.129 [2024-12-09 05:20:25.662193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.129 [2024-12-09 05:20:25.662201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.129 [2024-12-09 05:20:25.662385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.129 [2024-12-09 05:20:25.662566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.129 [2024-12-09 05:20:25.662574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.129 [2024-12-09 05:20:25.662581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.129 [2024-12-09 05:20:25.662588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.129 [2024-12-09 05:20:25.674979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.129 [2024-12-09 05:20:25.675413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.129 [2024-12-09 05:20:25.675430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.129 [2024-12-09 05:20:25.675437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.129 [2024-12-09 05:20:25.675615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.129 [2024-12-09 05:20:25.675794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.129 [2024-12-09 05:20:25.675803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.129 [2024-12-09 05:20:25.675810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.129 [2024-12-09 05:20:25.675817] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.129 [2024-12-09 05:20:25.687980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.129 [2024-12-09 05:20:25.688358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.129 [2024-12-09 05:20:25.688376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.129 [2024-12-09 05:20:25.688383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.129 [2024-12-09 05:20:25.688561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.129 [2024-12-09 05:20:25.688740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.129 [2024-12-09 05:20:25.688748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.129 [2024-12-09 05:20:25.688755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.129 [2024-12-09 05:20:25.688761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.129 [2024-12-09 05:20:25.700940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.129 [2024-12-09 05:20:25.701330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.129 [2024-12-09 05:20:25.701376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.129 [2024-12-09 05:20:25.701399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.129 [2024-12-09 05:20:25.701981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.129 [2024-12-09 05:20:25.702518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.129 [2024-12-09 05:20:25.702532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.129 [2024-12-09 05:20:25.702539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.130 [2024-12-09 05:20:25.702545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.130 [2024-12-09 05:20:25.713892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.130 [2024-12-09 05:20:25.714337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.130 [2024-12-09 05:20:25.714354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.130 [2024-12-09 05:20:25.714362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.130 [2024-12-09 05:20:25.714534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.130 [2024-12-09 05:20:25.714706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.130 [2024-12-09 05:20:25.714715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.130 [2024-12-09 05:20:25.714721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.130 [2024-12-09 05:20:25.714727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.130 [2024-12-09 05:20:25.727003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.130 [2024-12-09 05:20:25.727405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.130 [2024-12-09 05:20:25.727448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.130 [2024-12-09 05:20:25.727470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.130 [2024-12-09 05:20:25.728065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.130 [2024-12-09 05:20:25.728255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.130 [2024-12-09 05:20:25.728263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.130 [2024-12-09 05:20:25.728269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.130 [2024-12-09 05:20:25.728276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.130 [2024-12-09 05:20:25.739974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.130 [2024-12-09 05:20:25.740462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.130 [2024-12-09 05:20:25.740508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.130 [2024-12-09 05:20:25.740531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.130 [2024-12-09 05:20:25.741127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.130 [2024-12-09 05:20:25.741319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.130 [2024-12-09 05:20:25.741327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.130 [2024-12-09 05:20:25.741334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.130 [2024-12-09 05:20:25.741340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.130 [2024-12-09 05:20:25.752884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.130 [2024-12-09 05:20:25.753277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.130 [2024-12-09 05:20:25.753294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.130 [2024-12-09 05:20:25.753302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.130 [2024-12-09 05:20:25.753474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.130 [2024-12-09 05:20:25.753646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.130 [2024-12-09 05:20:25.753654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.130 [2024-12-09 05:20:25.753661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.130 [2024-12-09 05:20:25.753667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.130 [2024-12-09 05:20:25.765870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.130 [2024-12-09 05:20:25.766195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.130 [2024-12-09 05:20:25.766213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.130 [2024-12-09 05:20:25.766220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.130 [2024-12-09 05:20:25.766397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.130 [2024-12-09 05:20:25.766575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.130 [2024-12-09 05:20:25.766583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.130 [2024-12-09 05:20:25.766589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.130 [2024-12-09 05:20:25.766595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.389 [2024-12-09 05:20:25.778938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.389 [2024-12-09 05:20:25.779399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.389 [2024-12-09 05:20:25.779444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.389 [2024-12-09 05:20:25.779467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.390 [2024-12-09 05:20:25.780063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.390 [2024-12-09 05:20:25.780528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.390 [2024-12-09 05:20:25.780536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.390 [2024-12-09 05:20:25.780543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.390 [2024-12-09 05:20:25.780549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.390 [2024-12-09 05:20:25.791961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.390 [2024-12-09 05:20:25.792293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.390 [2024-12-09 05:20:25.792314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.390 [2024-12-09 05:20:25.792322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.390 [2024-12-09 05:20:25.792495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.390 [2024-12-09 05:20:25.792668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.390 [2024-12-09 05:20:25.792676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.390 [2024-12-09 05:20:25.792682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.390 [2024-12-09 05:20:25.792689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.390 [2024-12-09 05:20:25.804930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.390 [2024-12-09 05:20:25.805263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.390 [2024-12-09 05:20:25.805280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.390 [2024-12-09 05:20:25.805288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.390 [2024-12-09 05:20:25.805460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.390 [2024-12-09 05:20:25.805632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.390 [2024-12-09 05:20:25.805640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.390 [2024-12-09 05:20:25.805646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.390 [2024-12-09 05:20:25.805653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.390 [2024-12-09 05:20:25.817855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.390 [2024-12-09 05:20:25.818237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.390 [2024-12-09 05:20:25.818254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.390 [2024-12-09 05:20:25.818262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.390 [2024-12-09 05:20:25.818434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.390 [2024-12-09 05:20:25.818607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.390 [2024-12-09 05:20:25.818616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.390 [2024-12-09 05:20:25.818622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.390 [2024-12-09 05:20:25.818628] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.390 [2024-12-09 05:20:25.830874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.390 [2024-12-09 05:20:25.831244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.390 [2024-12-09 05:20:25.831261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.390 [2024-12-09 05:20:25.831268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.390 [2024-12-09 05:20:25.831444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.390 [2024-12-09 05:20:25.831618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.390 [2024-12-09 05:20:25.831626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.390 [2024-12-09 05:20:25.831633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.390 [2024-12-09 05:20:25.831639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.390 [2024-12-09 05:20:25.843939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.390 [2024-12-09 05:20:25.844298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.390 [2024-12-09 05:20:25.844315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.390 [2024-12-09 05:20:25.844322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.390 [2024-12-09 05:20:25.844495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.390 [2024-12-09 05:20:25.844667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.390 [2024-12-09 05:20:25.844676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.390 [2024-12-09 05:20:25.844682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.390 [2024-12-09 05:20:25.844689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.390 [2024-12-09 05:20:25.856787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.390 [2024-12-09 05:20:25.857170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.390 [2024-12-09 05:20:25.857187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.390 [2024-12-09 05:20:25.857194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.390 [2024-12-09 05:20:25.857367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.390 [2024-12-09 05:20:25.857542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.390 [2024-12-09 05:20:25.857550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.390 [2024-12-09 05:20:25.857557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.390 [2024-12-09 05:20:25.857563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.390 [2024-12-09 05:20:25.869779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.390 [2024-12-09 05:20:25.870198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.390 [2024-12-09 05:20:25.870215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.390 [2024-12-09 05:20:25.870222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.390 [2024-12-09 05:20:25.870394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.390 [2024-12-09 05:20:25.870567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.390 [2024-12-09 05:20:25.870579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.390 [2024-12-09 05:20:25.870585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.390 [2024-12-09 05:20:25.870591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.390 [2024-12-09 05:20:25.882817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.390 [2024-12-09 05:20:25.883146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.390 [2024-12-09 05:20:25.883164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.390 [2024-12-09 05:20:25.883171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.390 [2024-12-09 05:20:25.883343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.390 [2024-12-09 05:20:25.883517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.390 [2024-12-09 05:20:25.883525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.390 [2024-12-09 05:20:25.883532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.390 [2024-12-09 05:20:25.883538] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.390 [2024-12-09 05:20:25.895792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.390 [2024-12-09 05:20:25.896235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.390 [2024-12-09 05:20:25.896285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.390 [2024-12-09 05:20:25.896309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.390 [2024-12-09 05:20:25.896805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.390 [2024-12-09 05:20:25.896978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.391 [2024-12-09 05:20:25.896986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.391 [2024-12-09 05:20:25.896993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.391 [2024-12-09 05:20:25.897006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.391 [2024-12-09 05:20:25.908728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.391 [2024-12-09 05:20:25.909154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.391 [2024-12-09 05:20:25.909171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.391 [2024-12-09 05:20:25.909179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.391 [2024-12-09 05:20:25.909351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.391 [2024-12-09 05:20:25.909524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.391 [2024-12-09 05:20:25.909532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.391 [2024-12-09 05:20:25.909538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.391 [2024-12-09 05:20:25.909544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.391 [2024-12-09 05:20:25.921716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.391 [2024-12-09 05:20:25.922122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.391 [2024-12-09 05:20:25.922139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.391 [2024-12-09 05:20:25.922147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.391 [2024-12-09 05:20:25.922325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.391 [2024-12-09 05:20:25.922504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.391 [2024-12-09 05:20:25.922513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.391 [2024-12-09 05:20:25.922520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.391 [2024-12-09 05:20:25.922527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.391 [2024-12-09 05:20:25.934938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.391 [2024-12-09 05:20:25.935327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.391 [2024-12-09 05:20:25.935344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.391 [2024-12-09 05:20:25.935352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.391 [2024-12-09 05:20:25.935532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.391 [2024-12-09 05:20:25.935711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.391 [2024-12-09 05:20:25.935720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.391 [2024-12-09 05:20:25.935726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.391 [2024-12-09 05:20:25.935733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.391 [2024-12-09 05:20:25.948021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.391 [2024-12-09 05:20:25.948403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.391 [2024-12-09 05:20:25.948420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.391 [2024-12-09 05:20:25.948427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.391 [2024-12-09 05:20:25.948600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.391 [2024-12-09 05:20:25.948772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.391 [2024-12-09 05:20:25.948781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.391 [2024-12-09 05:20:25.948787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.391 [2024-12-09 05:20:25.948793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.391 [2024-12-09 05:20:25.960909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.391 [2024-12-09 05:20:25.961263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.391 [2024-12-09 05:20:25.961283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.391 [2024-12-09 05:20:25.961290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.391 [2024-12-09 05:20:25.961463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.391 [2024-12-09 05:20:25.961636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.391 [2024-12-09 05:20:25.961644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.391 [2024-12-09 05:20:25.961650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.391 [2024-12-09 05:20:25.961657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.391 [2024-12-09 05:20:25.973869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.391 [2024-12-09 05:20:25.974297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.391 [2024-12-09 05:20:25.974314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.391 [2024-12-09 05:20:25.974321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.391 [2024-12-09 05:20:25.974494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.391 [2024-12-09 05:20:25.974667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.391 [2024-12-09 05:20:25.974675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.391 [2024-12-09 05:20:25.974681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.391 [2024-12-09 05:20:25.974688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.391 [2024-12-09 05:20:25.986874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.391 [2024-12-09 05:20:25.987311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.391 [2024-12-09 05:20:25.987328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.391 [2024-12-09 05:20:25.987336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.391 [2024-12-09 05:20:25.987508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.391 [2024-12-09 05:20:25.987681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.391 [2024-12-09 05:20:25.987690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.391 [2024-12-09 05:20:25.987696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.391 [2024-12-09 05:20:25.987702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.391 [2024-12-09 05:20:25.999795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.391 [2024-12-09 05:20:26.000234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.391 [2024-12-09 05:20:26.000251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.391 [2024-12-09 05:20:26.000259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.391 [2024-12-09 05:20:26.000435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.391 [2024-12-09 05:20:26.000609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.391 [2024-12-09 05:20:26.000617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.391 [2024-12-09 05:20:26.000623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.391 [2024-12-09 05:20:26.000629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.391 [2024-12-09 05:20:26.012713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.391 [2024-12-09 05:20:26.013163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.391 [2024-12-09 05:20:26.013210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.391 [2024-12-09 05:20:26.013233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.391 [2024-12-09 05:20:26.013818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.391 [2024-12-09 05:20:26.013992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.391 [2024-12-09 05:20:26.014005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.392 [2024-12-09 05:20:26.014012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.392 [2024-12-09 05:20:26.014018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.392 [2024-12-09 05:20:26.025585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.392 [2024-12-09 05:20:26.026023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.392 [2024-12-09 05:20:26.026041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.392 [2024-12-09 05:20:26.026048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.392 [2024-12-09 05:20:26.026220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.392 [2024-12-09 05:20:26.026392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.392 [2024-12-09 05:20:26.026400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.392 [2024-12-09 05:20:26.026407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.392 [2024-12-09 05:20:26.026413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.652 [2024-12-09 05:20:26.038739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.652 [2024-12-09 05:20:26.039190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.652 [2024-12-09 05:20:26.039237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.652 [2024-12-09 05:20:26.039260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.652 [2024-12-09 05:20:26.039845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.652 [2024-12-09 05:20:26.040118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.652 [2024-12-09 05:20:26.040130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.652 [2024-12-09 05:20:26.040136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.652 [2024-12-09 05:20:26.040143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.652 [2024-12-09 05:20:26.051617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.652 [2024-12-09 05:20:26.052058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.652 [2024-12-09 05:20:26.052098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.652 [2024-12-09 05:20:26.052123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.652 [2024-12-09 05:20:26.052706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.652 [2024-12-09 05:20:26.053254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.652 [2024-12-09 05:20:26.053263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.652 [2024-12-09 05:20:26.053270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.652 [2024-12-09 05:20:26.053277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.652 [2024-12-09 05:20:26.064521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.652 [2024-12-09 05:20:26.064893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.652 [2024-12-09 05:20:26.064939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.652 [2024-12-09 05:20:26.064962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.652 [2024-12-09 05:20:26.065438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.652 [2024-12-09 05:20:26.065612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.652 [2024-12-09 05:20:26.065620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.652 [2024-12-09 05:20:26.065626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.652 [2024-12-09 05:20:26.065633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.652 [2024-12-09 05:20:26.077339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.652 [2024-12-09 05:20:26.077788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.652 [2024-12-09 05:20:26.077832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.652 [2024-12-09 05:20:26.077855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.652 [2024-12-09 05:20:26.078263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.652 [2024-12-09 05:20:26.078438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.652 [2024-12-09 05:20:26.078446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.652 [2024-12-09 05:20:26.078452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.652 [2024-12-09 05:20:26.078459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.652 [2024-12-09 05:20:26.090360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.652 [2024-12-09 05:20:26.090749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.652 [2024-12-09 05:20:26.090766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.652 [2024-12-09 05:20:26.090773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.652 [2024-12-09 05:20:26.090945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.652 [2024-12-09 05:20:26.091123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.652 [2024-12-09 05:20:26.091132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.652 [2024-12-09 05:20:26.091138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.652 [2024-12-09 05:20:26.091145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.652 [2024-12-09 05:20:26.103410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.652 [2024-12-09 05:20:26.103865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.652 [2024-12-09 05:20:26.103909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.652 [2024-12-09 05:20:26.103934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.652 [2024-12-09 05:20:26.104466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.652 [2024-12-09 05:20:26.104640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.652 [2024-12-09 05:20:26.104648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.652 [2024-12-09 05:20:26.104655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.652 [2024-12-09 05:20:26.104661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.652 [2024-12-09 05:20:26.116399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.652 [2024-12-09 05:20:26.116816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.652 [2024-12-09 05:20:26.116833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.652 [2024-12-09 05:20:26.116840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.652 [2024-12-09 05:20:26.117017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.652 [2024-12-09 05:20:26.117191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.652 [2024-12-09 05:20:26.117199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.652 [2024-12-09 05:20:26.117206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.652 [2024-12-09 05:20:26.117212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.652 [2024-12-09 05:20:26.129403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.652 [2024-12-09 05:20:26.129845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.652 [2024-12-09 05:20:26.129898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.652 [2024-12-09 05:20:26.129922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.652 [2024-12-09 05:20:26.130410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.652 [2024-12-09 05:20:26.130585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.652 [2024-12-09 05:20:26.130593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.652 [2024-12-09 05:20:26.130599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.652 [2024-12-09 05:20:26.130605] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.652 [2024-12-09 05:20:26.142371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.653 [2024-12-09 05:20:26.142763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.653 [2024-12-09 05:20:26.142781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.653 [2024-12-09 05:20:26.142788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.653 [2024-12-09 05:20:26.142961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.653 [2024-12-09 05:20:26.143140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.653 [2024-12-09 05:20:26.143148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.653 [2024-12-09 05:20:26.143155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.653 [2024-12-09 05:20:26.143161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.653 [2024-12-09 05:20:26.155337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.653 [2024-12-09 05:20:26.155819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.653 [2024-12-09 05:20:26.155863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.653 [2024-12-09 05:20:26.155886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.653 [2024-12-09 05:20:26.156392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.653 [2024-12-09 05:20:26.156566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.653 [2024-12-09 05:20:26.156574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.653 [2024-12-09 05:20:26.156581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.653 [2024-12-09 05:20:26.156588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.653 [2024-12-09 05:20:26.168206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.653 [2024-12-09 05:20:26.168652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.653 [2024-12-09 05:20:26.168698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.653 [2024-12-09 05:20:26.168722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.653 [2024-12-09 05:20:26.169213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.653 [2024-12-09 05:20:26.169388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.653 [2024-12-09 05:20:26.169397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.653 [2024-12-09 05:20:26.169404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.653 [2024-12-09 05:20:26.169410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.653 [2024-12-09 05:20:26.181222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.653 [2024-12-09 05:20:26.181615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.653 [2024-12-09 05:20:26.181632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.653 [2024-12-09 05:20:26.181639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.653 [2024-12-09 05:20:26.181817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.653 [2024-12-09 05:20:26.182006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.653 [2024-12-09 05:20:26.182015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.653 [2024-12-09 05:20:26.182022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.653 [2024-12-09 05:20:26.182029] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.653 [2024-12-09 05:20:26.194300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.653 [2024-12-09 05:20:26.194777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.653 [2024-12-09 05:20:26.194794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.653 [2024-12-09 05:20:26.194802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.653 [2024-12-09 05:20:26.194980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.653 [2024-12-09 05:20:26.195166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.653 [2024-12-09 05:20:26.195175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.653 [2024-12-09 05:20:26.195182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.653 [2024-12-09 05:20:26.195188] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.653 [2024-12-09 05:20:26.207409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.653 [2024-12-09 05:20:26.207856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.653 [2024-12-09 05:20:26.207872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.653 [2024-12-09 05:20:26.207879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.653 [2024-12-09 05:20:26.208058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.653 [2024-12-09 05:20:26.208232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.653 [2024-12-09 05:20:26.208243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.653 [2024-12-09 05:20:26.208250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.653 [2024-12-09 05:20:26.208256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.653 [2024-12-09 05:20:26.220507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.653 [2024-12-09 05:20:26.220976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.653 [2024-12-09 05:20:26.221033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.653 [2024-12-09 05:20:26.221057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.653 [2024-12-09 05:20:26.221594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.653 [2024-12-09 05:20:26.221768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.653 [2024-12-09 05:20:26.221776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.653 [2024-12-09 05:20:26.221783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.653 [2024-12-09 05:20:26.221790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.653 [2024-12-09 05:20:26.233499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.653 [2024-12-09 05:20:26.233917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.653 [2024-12-09 05:20:26.233934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.653 [2024-12-09 05:20:26.233941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.653 [2024-12-09 05:20:26.234123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.653 [2024-12-09 05:20:26.234296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.653 [2024-12-09 05:20:26.234305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.653 [2024-12-09 05:20:26.234311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.653 [2024-12-09 05:20:26.234317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.653 [2024-12-09 05:20:26.246498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.653 [2024-12-09 05:20:26.246967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.653 [2024-12-09 05:20:26.246984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.653 [2024-12-09 05:20:26.246991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.653 [2024-12-09 05:20:26.247173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.653 [2024-12-09 05:20:26.247347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.653 [2024-12-09 05:20:26.247355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.653 [2024-12-09 05:20:26.247362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.653 [2024-12-09 05:20:26.247368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.653 [2024-12-09 05:20:26.259431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.653 [2024-12-09 05:20:26.259867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.653 [2024-12-09 05:20:26.259883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.653 [2024-12-09 05:20:26.259890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.653 [2024-12-09 05:20:26.260079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.653 [2024-12-09 05:20:26.260252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.653 [2024-12-09 05:20:26.260260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.654 [2024-12-09 05:20:26.260267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.654 [2024-12-09 05:20:26.260273] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.654 [2024-12-09 05:20:26.272375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.654 [2024-12-09 05:20:26.272824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.654 [2024-12-09 05:20:26.272876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.654 [2024-12-09 05:20:26.272899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.654 [2024-12-09 05:20:26.273500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.654 [2024-12-09 05:20:26.273688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.654 [2024-12-09 05:20:26.273696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.654 [2024-12-09 05:20:26.273703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.654 [2024-12-09 05:20:26.273710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.654 [2024-12-09 05:20:26.285320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.654 [2024-12-09 05:20:26.285681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.654 [2024-12-09 05:20:26.285697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.654 [2024-12-09 05:20:26.285704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.654 [2024-12-09 05:20:26.285867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.654 [2024-12-09 05:20:26.286068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.654 [2024-12-09 05:20:26.286076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.654 [2024-12-09 05:20:26.286083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.654 [2024-12-09 05:20:26.286090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.914 [2024-12-09 05:20:26.298388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.914 [2024-12-09 05:20:26.298843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-12-09 05:20:26.298902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.914 [2024-12-09 05:20:26.298926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.914 [2024-12-09 05:20:26.299527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.914 [2024-12-09 05:20:26.299921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.914 [2024-12-09 05:20:26.299929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.914 [2024-12-09 05:20:26.299935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.914 [2024-12-09 05:20:26.299942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.914 [2024-12-09 05:20:26.311211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.914 [2024-12-09 05:20:26.311651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-12-09 05:20:26.311668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.914 [2024-12-09 05:20:26.311674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.914 [2024-12-09 05:20:26.311837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.914 [2024-12-09 05:20:26.312006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.914 [2024-12-09 05:20:26.312014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.914 [2024-12-09 05:20:26.312020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.914 [2024-12-09 05:20:26.312043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.914 [2024-12-09 05:20:26.324065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.914 [2024-12-09 05:20:26.324508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-12-09 05:20:26.324524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.914 [2024-12-09 05:20:26.324531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.914 [2024-12-09 05:20:26.324695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.914 [2024-12-09 05:20:26.324858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.914 [2024-12-09 05:20:26.324865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.914 [2024-12-09 05:20:26.324871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.914 [2024-12-09 05:20:26.324877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.914 5386.80 IOPS, 21.04 MiB/s [2024-12-09T04:20:26.560Z] [2024-12-09 05:20:26.336928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.914 [2024-12-09 05:20:26.337316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-12-09 05:20:26.337333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.914 [2024-12-09 05:20:26.337341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.915 [2024-12-09 05:20:26.337517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.915 [2024-12-09 05:20:26.337690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.915 [2024-12-09 05:20:26.337699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.915 [2024-12-09 05:20:26.337705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.915 [2024-12-09 05:20:26.337712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.915 [2024-12-09 05:20:26.350039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.915 [2024-12-09 05:20:26.350474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-12-09 05:20:26.350519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.915 [2024-12-09 05:20:26.350542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.915 [2024-12-09 05:20:26.350969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.915 [2024-12-09 05:20:26.351150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.915 [2024-12-09 05:20:26.351159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.915 [2024-12-09 05:20:26.351165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.915 [2024-12-09 05:20:26.351171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.915 [2024-12-09 05:20:26.362883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.915 [2024-12-09 05:20:26.363342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-12-09 05:20:26.363391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.915 [2024-12-09 05:20:26.363415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.915 [2024-12-09 05:20:26.363975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.915 [2024-12-09 05:20:26.364155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.915 [2024-12-09 05:20:26.364163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.915 [2024-12-09 05:20:26.364170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.915 [2024-12-09 05:20:26.364176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.915 [2024-12-09 05:20:26.375727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.915 [2024-12-09 05:20:26.376134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-12-09 05:20:26.376150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.915 [2024-12-09 05:20:26.376157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.915 [2024-12-09 05:20:26.376321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.915 [2024-12-09 05:20:26.376484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.915 [2024-12-09 05:20:26.376494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.915 [2024-12-09 05:20:26.376500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.915 [2024-12-09 05:20:26.376506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.915 [2024-12-09 05:20:26.388559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.915 [2024-12-09 05:20:26.389024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-12-09 05:20:26.389041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.915 [2024-12-09 05:20:26.389048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.915 [2024-12-09 05:20:26.389229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.915 [2024-12-09 05:20:26.389393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.915 [2024-12-09 05:20:26.389401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.915 [2024-12-09 05:20:26.389407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.915 [2024-12-09 05:20:26.389413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.915 [2024-12-09 05:20:26.401435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.915 [2024-12-09 05:20:26.401874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-12-09 05:20:26.401891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.915 [2024-12-09 05:20:26.401898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.915 [2024-12-09 05:20:26.402076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.915 [2024-12-09 05:20:26.402249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.915 [2024-12-09 05:20:26.402257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.915 [2024-12-09 05:20:26.402263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.915 [2024-12-09 05:20:26.402269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.915 [2024-12-09 05:20:26.414297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.915 [2024-12-09 05:20:26.414669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-12-09 05:20:26.414685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.915 [2024-12-09 05:20:26.414692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.915 [2024-12-09 05:20:26.414855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.915 [2024-12-09 05:20:26.415040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.915 [2024-12-09 05:20:26.415049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.915 [2024-12-09 05:20:26.415055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.915 [2024-12-09 05:20:26.415061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.915 [2024-12-09 05:20:26.427231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.915 [2024-12-09 05:20:26.427708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-12-09 05:20:26.427753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.915 [2024-12-09 05:20:26.427775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.915 [2024-12-09 05:20:26.428165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.915 [2024-12-09 05:20:26.428339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.915 [2024-12-09 05:20:26.428347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.915 [2024-12-09 05:20:26.428354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.915 [2024-12-09 05:20:26.428360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.915 [2024-12-09 05:20:26.440198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.915 [2024-12-09 05:20:26.440651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-12-09 05:20:26.440668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.915 [2024-12-09 05:20:26.440675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.915 [2024-12-09 05:20:26.440847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.915 [2024-12-09 05:20:26.441041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.915 [2024-12-09 05:20:26.441050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.915 [2024-12-09 05:20:26.441057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.915 [2024-12-09 05:20:26.441063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.915 [2024-12-09 05:20:26.453387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.915 [2024-12-09 05:20:26.453773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-12-09 05:20:26.453790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.915 [2024-12-09 05:20:26.453797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.915 [2024-12-09 05:20:26.453974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.915 [2024-12-09 05:20:26.454158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.915 [2024-12-09 05:20:26.454167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.915 [2024-12-09 05:20:26.454174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.915 [2024-12-09 05:20:26.454180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.916 [2024-12-09 05:20:26.466332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.916 [2024-12-09 05:20:26.466772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-12-09 05:20:26.466813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.916 [2024-12-09 05:20:26.466839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.916 [2024-12-09 05:20:26.467374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.916 [2024-12-09 05:20:26.467553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.916 [2024-12-09 05:20:26.467561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.916 [2024-12-09 05:20:26.467568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.916 [2024-12-09 05:20:26.467574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.916 [2024-12-09 05:20:26.479261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.916 [2024-12-09 05:20:26.479722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-12-09 05:20:26.479738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.916 [2024-12-09 05:20:26.479746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.916 [2024-12-09 05:20:26.479919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.916 [2024-12-09 05:20:26.480096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.916 [2024-12-09 05:20:26.480105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.916 [2024-12-09 05:20:26.480111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.916 [2024-12-09 05:20:26.480118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.916 [2024-12-09 05:20:26.492150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.916 [2024-12-09 05:20:26.492593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-12-09 05:20:26.492609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.916 [2024-12-09 05:20:26.492616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.916 [2024-12-09 05:20:26.492779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.916 [2024-12-09 05:20:26.492942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.916 [2024-12-09 05:20:26.492950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.916 [2024-12-09 05:20:26.492956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.916 [2024-12-09 05:20:26.492962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.916 [2024-12-09 05:20:26.505089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.916 [2024-12-09 05:20:26.505428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-12-09 05:20:26.505444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.916 [2024-12-09 05:20:26.505450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.916 [2024-12-09 05:20:26.505616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.916 [2024-12-09 05:20:26.505780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.916 [2024-12-09 05:20:26.505788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.916 [2024-12-09 05:20:26.505794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.916 [2024-12-09 05:20:26.505800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.916 [2024-12-09 05:20:26.518005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.916 [2024-12-09 05:20:26.518458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-12-09 05:20:26.518505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.916 [2024-12-09 05:20:26.518529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.916 [2024-12-09 05:20:26.519139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.916 [2024-12-09 05:20:26.519313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.916 [2024-12-09 05:20:26.519321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.916 [2024-12-09 05:20:26.519328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.916 [2024-12-09 05:20:26.519335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.916 [2024-12-09 05:20:26.530924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.916 [2024-12-09 05:20:26.531404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-12-09 05:20:26.531450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.916 [2024-12-09 05:20:26.531473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.916 [2024-12-09 05:20:26.532073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.916 [2024-12-09 05:20:26.532662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.916 [2024-12-09 05:20:26.532687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.916 [2024-12-09 05:20:26.532717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.916 [2024-12-09 05:20:26.532726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:49.916 [2024-12-09 05:20:26.544579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.916 [2024-12-09 05:20:26.545054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-12-09 05:20:26.545072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:49.916 [2024-12-09 05:20:26.545079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:49.916 [2024-12-09 05:20:26.545253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:49.916 [2024-12-09 05:20:26.545427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.916 [2024-12-09 05:20:26.545438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.916 [2024-12-09 05:20:26.545445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.916 [2024-12-09 05:20:26.545451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.177 [2024-12-09 05:20:26.557726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.177 [2024-12-09 05:20:26.558176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-12-09 05:20:26.558193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.177 [2024-12-09 05:20:26.558200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.177 [2024-12-09 05:20:26.558378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.177 [2024-12-09 05:20:26.558558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.177 [2024-12-09 05:20:26.558566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.177 [2024-12-09 05:20:26.558573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.177 [2024-12-09 05:20:26.558579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.177 [2024-12-09 05:20:26.570680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.177 [2024-12-09 05:20:26.571156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-12-09 05:20:26.571202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.177 [2024-12-09 05:20:26.571225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.177 [2024-12-09 05:20:26.571808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.177 [2024-12-09 05:20:26.572409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.177 [2024-12-09 05:20:26.572418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.177 [2024-12-09 05:20:26.572425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.177 [2024-12-09 05:20:26.572431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.177 [2024-12-09 05:20:26.584387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.177 [2024-12-09 05:20:26.584808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-12-09 05:20:26.584824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.177 [2024-12-09 05:20:26.584832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.177 [2024-12-09 05:20:26.585010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.177 [2024-12-09 05:20:26.585184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.177 [2024-12-09 05:20:26.585192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.177 [2024-12-09 05:20:26.585199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.177 [2024-12-09 05:20:26.585205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.177 [2024-12-09 05:20:26.597405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.177 [2024-12-09 05:20:26.597860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-12-09 05:20:26.597877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.177 [2024-12-09 05:20:26.597885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.177 [2024-12-09 05:20:26.598079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.177 [2024-12-09 05:20:26.598258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.177 [2024-12-09 05:20:26.598267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.177 [2024-12-09 05:20:26.598273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.177 [2024-12-09 05:20:26.598279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.177 [2024-12-09 05:20:26.610308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.177 [2024-12-09 05:20:26.610751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-12-09 05:20:26.610767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.177 [2024-12-09 05:20:26.610775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.177 [2024-12-09 05:20:26.610939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.177 [2024-12-09 05:20:26.611129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.177 [2024-12-09 05:20:26.611137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.177 [2024-12-09 05:20:26.611144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.177 [2024-12-09 05:20:26.611150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.177 [2024-12-09 05:20:26.623223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.177 [2024-12-09 05:20:26.623664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-12-09 05:20:26.623680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.177 [2024-12-09 05:20:26.623687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.177 [2024-12-09 05:20:26.623850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.177 [2024-12-09 05:20:26.624018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.177 [2024-12-09 05:20:26.624042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.177 [2024-12-09 05:20:26.624049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.177 [2024-12-09 05:20:26.624056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.177 [2024-12-09 05:20:26.636079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.177 [2024-12-09 05:20:26.636524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-12-09 05:20:26.636588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.177 [2024-12-09 05:20:26.636613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.177 [2024-12-09 05:20:26.637212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.177 [2024-12-09 05:20:26.637708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.177 [2024-12-09 05:20:26.637717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.177 [2024-12-09 05:20:26.637723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.177 [2024-12-09 05:20:26.637730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.177 [2024-12-09 05:20:26.649027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.177 [2024-12-09 05:20:26.649519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-12-09 05:20:26.649563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.177 [2024-12-09 05:20:26.649586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.177 [2024-12-09 05:20:26.650184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.177 [2024-12-09 05:20:26.650679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.177 [2024-12-09 05:20:26.650687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.177 [2024-12-09 05:20:26.650693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.177 [2024-12-09 05:20:26.650699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.177 [2024-12-09 05:20:26.661946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.177 [2024-12-09 05:20:26.662317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-12-09 05:20:26.662334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.177 [2024-12-09 05:20:26.662341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.177 [2024-12-09 05:20:26.662505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.177 [2024-12-09 05:20:26.662669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.177 [2024-12-09 05:20:26.662676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.177 [2024-12-09 05:20:26.662683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.178 [2024-12-09 05:20:26.662689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.178 [2024-12-09 05:20:26.674871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.178 [2024-12-09 05:20:26.675315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-12-09 05:20:26.675333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.178 [2024-12-09 05:20:26.675340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.178 [2024-12-09 05:20:26.675515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.178 [2024-12-09 05:20:26.675688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.178 [2024-12-09 05:20:26.675696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.178 [2024-12-09 05:20:26.675703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.178 [2024-12-09 05:20:26.675709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.178 [2024-12-09 05:20:26.687669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.178 [2024-12-09 05:20:26.687996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-12-09 05:20:26.688016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.178 [2024-12-09 05:20:26.688022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.178 [2024-12-09 05:20:26.688186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.178 [2024-12-09 05:20:26.688350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.178 [2024-12-09 05:20:26.688357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.178 [2024-12-09 05:20:26.688363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.178 [2024-12-09 05:20:26.688369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.178 [2024-12-09 05:20:26.700610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.178 [2024-12-09 05:20:26.701039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-12-09 05:20:26.701057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.178 [2024-12-09 05:20:26.701081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.178 [2024-12-09 05:20:26.701260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.178 [2024-12-09 05:20:26.701442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.178 [2024-12-09 05:20:26.701451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.178 [2024-12-09 05:20:26.701458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.178 [2024-12-09 05:20:26.701466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.178 [2024-12-09 05:20:26.713852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.178 [2024-12-09 05:20:26.714203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-12-09 05:20:26.714220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.178 [2024-12-09 05:20:26.714229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.178 [2024-12-09 05:20:26.714407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.178 [2024-12-09 05:20:26.714585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.178 [2024-12-09 05:20:26.714596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.178 [2024-12-09 05:20:26.714603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.178 [2024-12-09 05:20:26.714611] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.178 [2024-12-09 05:20:26.726711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.178 [2024-12-09 05:20:26.727159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-12-09 05:20:26.727205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.178 [2024-12-09 05:20:26.727228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.178 [2024-12-09 05:20:26.727823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.178 [2024-12-09 05:20:26.727987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.178 [2024-12-09 05:20:26.727995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.178 [2024-12-09 05:20:26.728006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.178 [2024-12-09 05:20:26.728012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.178 [2024-12-09 05:20:26.739777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.178 [2024-12-09 05:20:26.740247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-12-09 05:20:26.740263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.178 [2024-12-09 05:20:26.740271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.178 [2024-12-09 05:20:26.740443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.178 [2024-12-09 05:20:26.740621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.178 [2024-12-09 05:20:26.740629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.178 [2024-12-09 05:20:26.740635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.178 [2024-12-09 05:20:26.740641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.178 [2024-12-09 05:20:26.752664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.178 [2024-12-09 05:20:26.753081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-12-09 05:20:26.753098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.178 [2024-12-09 05:20:26.753105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.178 [2024-12-09 05:20:26.753268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.178 [2024-12-09 05:20:26.753430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.178 [2024-12-09 05:20:26.753438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.178 [2024-12-09 05:20:26.753444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.178 [2024-12-09 05:20:26.753450] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.178 [2024-12-09 05:20:26.765529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.178 [2024-12-09 05:20:26.765979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-12-09 05:20:26.766031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.178 [2024-12-09 05:20:26.766055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.178 [2024-12-09 05:20:26.766638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.178 [2024-12-09 05:20:26.767138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.178 [2024-12-09 05:20:26.767147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.178 [2024-12-09 05:20:26.767153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.178 [2024-12-09 05:20:26.767160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.178 [2024-12-09 05:20:26.778451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.178 [2024-12-09 05:20:26.778892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-12-09 05:20:26.778934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.178 [2024-12-09 05:20:26.778958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.178 [2024-12-09 05:20:26.779558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.178 [2024-12-09 05:20:26.779871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.178 [2024-12-09 05:20:26.779879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.178 [2024-12-09 05:20:26.779885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.178 [2024-12-09 05:20:26.779892] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.178 [2024-12-09 05:20:26.791311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.178 [2024-12-09 05:20:26.791695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-12-09 05:20:26.791711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.178 [2024-12-09 05:20:26.791718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.178 [2024-12-09 05:20:26.791891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.178 [2024-12-09 05:20:26.792075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.178 [2024-12-09 05:20:26.792084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.179 [2024-12-09 05:20:26.792091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.179 [2024-12-09 05:20:26.792097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.179 [2024-12-09 05:20:26.804145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.179 [2024-12-09 05:20:26.804617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-12-09 05:20:26.804668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.179 [2024-12-09 05:20:26.804692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.179 [2024-12-09 05:20:26.805175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.179 [2024-12-09 05:20:26.805432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.179 [2024-12-09 05:20:26.805443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.179 [2024-12-09 05:20:26.805452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.179 [2024-12-09 05:20:26.805462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.179 [2024-12-09 05:20:26.817634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.179 [2024-12-09 05:20:26.818094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-12-09 05:20:26.818112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.179 [2024-12-09 05:20:26.818119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.179 [2024-12-09 05:20:26.818298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.179 [2024-12-09 05:20:26.818477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.179 [2024-12-09 05:20:26.818485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.179 [2024-12-09 05:20:26.818492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.179 [2024-12-09 05:20:26.818499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.440 [2024-12-09 05:20:26.830523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.440 [2024-12-09 05:20:26.830961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.440 [2024-12-09 05:20:26.830977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.440 [2024-12-09 05:20:26.830984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.440 [2024-12-09 05:20:26.831176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.440 [2024-12-09 05:20:26.831349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.440 [2024-12-09 05:20:26.831357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.440 [2024-12-09 05:20:26.831364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.440 [2024-12-09 05:20:26.831370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.440 [2024-12-09 05:20:26.843327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.440 [2024-12-09 05:20:26.843775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.440 [2024-12-09 05:20:26.843819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.440 [2024-12-09 05:20:26.843842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.440 [2024-12-09 05:20:26.844302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.440 [2024-12-09 05:20:26.844476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.440 [2024-12-09 05:20:26.844484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.440 [2024-12-09 05:20:26.844491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.440 [2024-12-09 05:20:26.844497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3731790 Killed "${NVMF_APP[@]}" "$@" 00:25:50.440 05:20:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:25:50.440 05:20:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:50.440 05:20:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:50.440 05:20:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:50.440 05:20:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:50.440 [2024-12-09 05:20:26.856392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.440 [2024-12-09 05:20:26.856765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.440 [2024-12-09 05:20:26.856782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.440 [2024-12-09 05:20:26.856790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.440 [2024-12-09 05:20:26.856968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.440 [2024-12-09 05:20:26.857152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.440 [2024-12-09 05:20:26.857161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.440 [2024-12-09 05:20:26.857168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.440 [2024-12-09 05:20:26.857174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
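The connect() failures above all report errno = 111, which on Linux is ECONNREFUSED: nothing is listening on 10.0.0.2:4420 any longer because the previous nvmf_tgt (PID 3731790) was just killed by bdevperf.sh, so the host-side bdev_nvme layer keeps cycling through disconnect/reconnect attempts until tgt_init brings a new target up. A minimal spot-check of the errno mapping, for illustration only and not part of the test run:

    # errno 111 on Linux is ECONNREFUSED ("Connection refused")
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # expected output: ECONNREFUSED - Connection refused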
00:25:50.440 05:20:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3732976 00:25:50.440 05:20:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3732976 00:25:50.440 05:20:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:50.440 05:20:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3732976 ']' 00:25:50.440 05:20:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.440 05:20:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:50.440 05:20:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:50.440 05:20:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:50.440 05:20:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:50.440 [2024-12-09 05:20:26.869603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.440 [2024-12-09 05:20:26.869988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.440 [2024-12-09 05:20:26.870033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.440 [2024-12-09 05:20:26.870045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.440 [2024-12-09 05:20:26.870224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.440 [2024-12-09 05:20:26.870402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.440 [2024-12-09 05:20:26.870410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.440 [2024-12-09 05:20:26.870416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.440 [2024-12-09 05:20:26.870423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.440 [2024-12-09 05:20:26.882686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.440 [2024-12-09 05:20:26.883048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.440 [2024-12-09 05:20:26.883066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.440 [2024-12-09 05:20:26.883073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.440 [2024-12-09 05:20:26.883252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.440 [2024-12-09 05:20:26.883431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.440 [2024-12-09 05:20:26.883440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.440 [2024-12-09 05:20:26.883446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.440 [2024-12-09 05:20:26.883453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.440 [2024-12-09 05:20:26.895829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.440 [2024-12-09 05:20:26.896285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.440 [2024-12-09 05:20:26.896302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.440 [2024-12-09 05:20:26.896310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.440 [2024-12-09 05:20:26.896487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.440 [2024-12-09 05:20:26.896667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.440 [2024-12-09 05:20:26.896675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.440 [2024-12-09 05:20:26.896682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.440 [2024-12-09 05:20:26.896688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.440 [2024-12-09 05:20:26.905258] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:25:50.440 [2024-12-09 05:20:26.905297] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:50.440 [2024-12-09 05:20:26.909013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.440 [2024-12-09 05:20:26.909471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.440 [2024-12-09 05:20:26.909489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.440 [2024-12-09 05:20:26.909502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.440 [2024-12-09 05:20:26.909680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.440 [2024-12-09 05:20:26.909858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.440 [2024-12-09 05:20:26.909866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.440 [2024-12-09 05:20:26.909873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.440 [2024-12-09 05:20:26.909880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.440 [2024-12-09 05:20:26.922172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.440 [2024-12-09 05:20:26.922632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.440 [2024-12-09 05:20:26.922649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.440 [2024-12-09 05:20:26.922657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.441 [2024-12-09 05:20:26.922836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.441 [2024-12-09 05:20:26.923019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.441 [2024-12-09 05:20:26.923028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.441 [2024-12-09 05:20:26.923035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.441 [2024-12-09 05:20:26.923041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.441 [2024-12-09 05:20:26.935228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.441 [2024-12-09 05:20:26.935658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.441 [2024-12-09 05:20:26.935676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.441 [2024-12-09 05:20:26.935684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.441 [2024-12-09 05:20:26.935864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.441 [2024-12-09 05:20:26.936047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.441 [2024-12-09 05:20:26.936056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.441 [2024-12-09 05:20:26.936063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.441 [2024-12-09 05:20:26.936069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.441 [2024-12-09 05:20:26.948326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.441 [2024-12-09 05:20:26.948802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.441 [2024-12-09 05:20:26.948819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.441 [2024-12-09 05:20:26.948827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.441 [2024-12-09 05:20:26.949010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.441 [2024-12-09 05:20:26.949192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.441 [2024-12-09 05:20:26.949201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.441 [2024-12-09 05:20:26.949208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.441 [2024-12-09 05:20:26.949214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.441 [2024-12-09 05:20:26.961365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.441 [2024-12-09 05:20:26.961768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.441 [2024-12-09 05:20:26.961786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.441 [2024-12-09 05:20:26.961794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.441 [2024-12-09 05:20:26.961972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.441 [2024-12-09 05:20:26.962155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.441 [2024-12-09 05:20:26.962165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.441 [2024-12-09 05:20:26.962173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.441 [2024-12-09 05:20:26.962180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.441 [2024-12-09 05:20:26.974253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:50.441 [2024-12-09 05:20:26.974443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.441 [2024-12-09 05:20:26.974869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.441 [2024-12-09 05:20:26.974885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.441 [2024-12-09 05:20:26.974894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.441 [2024-12-09 05:20:26.975078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.441 [2024-12-09 05:20:26.975257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.441 [2024-12-09 05:20:26.975266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.441 [2024-12-09 05:20:26.975274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.441 [2024-12-09 05:20:26.975281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.441 [2024-12-09 05:20:26.987540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.441 [2024-12-09 05:20:26.987866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.441 [2024-12-09 05:20:26.987885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.441 [2024-12-09 05:20:26.987893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.441 [2024-12-09 05:20:26.988077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.441 [2024-12-09 05:20:26.988257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.441 [2024-12-09 05:20:26.988265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.441 [2024-12-09 05:20:26.988277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.441 [2024-12-09 05:20:26.988284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.441 [2024-12-09 05:20:27.000713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.441 [2024-12-09 05:20:27.001146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.441 [2024-12-09 05:20:27.001164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.441 [2024-12-09 05:20:27.001172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.441 [2024-12-09 05:20:27.001353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.441 [2024-12-09 05:20:27.001532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.441 [2024-12-09 05:20:27.001541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.441 [2024-12-09 05:20:27.001548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.441 [2024-12-09 05:20:27.001554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.441 [2024-12-09 05:20:27.013875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.441 [2024-12-09 05:20:27.014253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.441 [2024-12-09 05:20:27.014271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.441 [2024-12-09 05:20:27.014279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.441 [2024-12-09 05:20:27.014457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.441 [2024-12-09 05:20:27.014637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.441 [2024-12-09 05:20:27.014645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.441 [2024-12-09 05:20:27.014653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.441 [2024-12-09 05:20:27.014660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.441 [2024-12-09 05:20:27.017294] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:50.441 [2024-12-09 05:20:27.017320] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:50.441 [2024-12-09 05:20:27.017327] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:50.441 [2024-12-09 05:20:27.017335] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:50.441 [2024-12-09 05:20:27.017341] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
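Per the app_setup_trace notices just above, the 0xFFFF tracepoint group mask is active on the restarted target, so a runtime snapshot can be taken with the command quoted in the notice, or the shared-memory trace file can be copied out for offline analysis. A hedged usage sketch based only on those notices (the working directory and destination path are assumptions):

    spdk_trace -s nvmf -i 0          # snapshot of the running nvmf target, instance 0
    cp /dev/shm/nvmf_trace.0 /tmp/   # keep the raw trace for offline analysis/debug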
00:25:50.441 [2024-12-09 05:20:27.018758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:50.441 [2024-12-09 05:20:27.018841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:50.441 [2024-12-09 05:20:27.018842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.441 [2024-12-09 05:20:27.026975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.441 [2024-12-09 05:20:27.027302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.441 [2024-12-09 05:20:27.027322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.441 [2024-12-09 05:20:27.027336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.441 [2024-12-09 05:20:27.027515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.441 [2024-12-09 05:20:27.027695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.441 [2024-12-09 05:20:27.027703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.441 [2024-12-09 05:20:27.027711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.441 [2024-12-09 05:20:27.027718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.441 [2024-12-09 05:20:27.040152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.441 [2024-12-09 05:20:27.040480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.441 [2024-12-09 05:20:27.040500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.441 [2024-12-09 05:20:27.040509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.442 [2024-12-09 05:20:27.040689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.442 [2024-12-09 05:20:27.040869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.442 [2024-12-09 05:20:27.040878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.442 [2024-12-09 05:20:27.040886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.442 [2024-12-09 05:20:27.040895] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
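The new target was launched with core mask -m 0xE (see the nvmf_tgt command line above); 0xE is binary 1110, i.e. cores 1, 2 and 3, which matches both the "Total cores available: 3" notice and the three reactors started on cores 1-3 here. A quick sketch of decoding the mask, for illustration only:

    # -m 0xE -> bits 1, 2 and 3 set -> reactors on cores 1, 2 and 3
    printf 'cores:'; for bit in 0 1 2 3; do (( (0xE >> bit) & 1 )) && printf ' %d' "$bit"; done; echo
    # prints: cores: 1 2 3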
00:25:50.442 [2024-12-09 05:20:27.053321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.442 [2024-12-09 05:20:27.053698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-12-09 05:20:27.053719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.442 [2024-12-09 05:20:27.053728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.442 [2024-12-09 05:20:27.053907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.442 [2024-12-09 05:20:27.054091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.442 [2024-12-09 05:20:27.054100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.442 [2024-12-09 05:20:27.054107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.442 [2024-12-09 05:20:27.054115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.442 [2024-12-09 05:20:27.066541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.442 [2024-12-09 05:20:27.066893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-12-09 05:20:27.066914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.442 [2024-12-09 05:20:27.066922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.442 [2024-12-09 05:20:27.067107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.442 [2024-12-09 05:20:27.067293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.442 [2024-12-09 05:20:27.067302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.442 [2024-12-09 05:20:27.067310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.442 [2024-12-09 05:20:27.067317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.442 [2024-12-09 05:20:27.079743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.442 [2024-12-09 05:20:27.080079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-12-09 05:20:27.080101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.442 [2024-12-09 05:20:27.080110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.442 [2024-12-09 05:20:27.080289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.442 [2024-12-09 05:20:27.080468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.442 [2024-12-09 05:20:27.080476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.442 [2024-12-09 05:20:27.080483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.442 [2024-12-09 05:20:27.080491] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.701 [2024-12-09 05:20:27.092927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.701 [2024-12-09 05:20:27.093297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.701 [2024-12-09 05:20:27.093315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.701 [2024-12-09 05:20:27.093324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.701 [2024-12-09 05:20:27.093502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.701 [2024-12-09 05:20:27.093683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.701 [2024-12-09 05:20:27.093691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.701 [2024-12-09 05:20:27.093698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.701 [2024-12-09 05:20:27.093705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.701 [2024-12-09 05:20:27.106128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.701 [2024-12-09 05:20:27.106487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.702 [2024-12-09 05:20:27.106503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.702 [2024-12-09 05:20:27.106511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.702 [2024-12-09 05:20:27.106688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.702 [2024-12-09 05:20:27.106867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.702 [2024-12-09 05:20:27.106876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.702 [2024-12-09 05:20:27.106887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.702 [2024-12-09 05:20:27.106894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.702 05:20:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.702 05:20:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:25:50.702 05:20:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:50.702 05:20:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:50.702 05:20:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:50.702 [2024-12-09 05:20:27.119192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.702 [2024-12-09 05:20:27.119516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.702 [2024-12-09 05:20:27.119534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.702 [2024-12-09 05:20:27.119542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.702 [2024-12-09 05:20:27.119721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.702 [2024-12-09 05:20:27.119900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.702 [2024-12-09 05:20:27.119909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.702 [2024-12-09 05:20:27.119916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.702 [2024-12-09 05:20:27.119923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.702 [2024-12-09 05:20:27.132360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.702 [2024-12-09 05:20:27.132719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.702 [2024-12-09 05:20:27.132737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.702 [2024-12-09 05:20:27.132745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.702 [2024-12-09 05:20:27.132923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.702 [2024-12-09 05:20:27.133108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.702 [2024-12-09 05:20:27.133116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.702 [2024-12-09 05:20:27.133123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.702 [2024-12-09 05:20:27.133129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.702 [2024-12-09 05:20:27.145565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.702 [2024-12-09 05:20:27.145916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.702 [2024-12-09 05:20:27.145933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.702 [2024-12-09 05:20:27.145941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.702 [2024-12-09 05:20:27.146123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.702 [2024-12-09 05:20:27.146303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.702 [2024-12-09 05:20:27.146317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.702 [2024-12-09 05:20:27.146323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.702 [2024-12-09 05:20:27.146330] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.702 05:20:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.702 05:20:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:50.702 05:20:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.702 05:20:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:50.702 [2024-12-09 05:20:27.158769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.702 [2024-12-09 05:20:27.159088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.702 [2024-12-09 05:20:27.159106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.702 [2024-12-09 05:20:27.159113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.702 [2024-12-09 05:20:27.159291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.702 [2024-12-09 05:20:27.159470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.702 [2024-12-09 05:20:27.159478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.702 [2024-12-09 05:20:27.159485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.702 [2024-12-09 05:20:27.159492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.702 [2024-12-09 05:20:27.160292] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.702 05:20:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.702 05:20:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:50.702 05:20:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.702 05:20:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:50.702 [2024-12-09 05:20:27.171930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.702 [2024-12-09 05:20:27.172288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.702 [2024-12-09 05:20:27.172306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.703 [2024-12-09 05:20:27.172314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.703 [2024-12-09 05:20:27.172492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.703 [2024-12-09 05:20:27.172671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.703 [2024-12-09 05:20:27.172679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.703 [2024-12-09 05:20:27.172686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.703 [2024-12-09 05:20:27.172693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.703 [2024-12-09 05:20:27.185110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.703 [2024-12-09 05:20:27.185470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.703 [2024-12-09 05:20:27.185487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.703 [2024-12-09 05:20:27.185494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.703 [2024-12-09 05:20:27.185673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.703 [2024-12-09 05:20:27.185852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.703 [2024-12-09 05:20:27.185860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.703 [2024-12-09 05:20:27.185867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.703 [2024-12-09 05:20:27.185873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.703 [2024-12-09 05:20:27.198173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.703 [2024-12-09 05:20:27.198590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.703 [2024-12-09 05:20:27.198608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.703 [2024-12-09 05:20:27.198616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.703 [2024-12-09 05:20:27.198795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.703 [2024-12-09 05:20:27.198975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.703 [2024-12-09 05:20:27.198984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.703 [2024-12-09 05:20:27.198990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.703 [2024-12-09 05:20:27.199004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.703 Malloc0 00:25:50.703 05:20:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.703 05:20:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:50.703 05:20:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.703 05:20:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:50.703 [2024-12-09 05:20:27.211267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.703 [2024-12-09 05:20:27.211627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.703 [2024-12-09 05:20:27.211644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.703 [2024-12-09 05:20:27.211651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.703 [2024-12-09 05:20:27.211829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.703 [2024-12-09 05:20:27.212012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.703 [2024-12-09 05:20:27.212021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.703 [2024-12-09 05:20:27.212028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.703 [2024-12-09 05:20:27.212034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.703 05:20:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.703 05:20:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:50.703 05:20:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.703 05:20:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:50.703 05:20:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.703 05:20:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:50.703 05:20:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.703 05:20:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:50.703 [2024-12-09 05:20:27.224472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.703 [2024-12-09 05:20:27.224880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.703 [2024-12-09 05:20:27.224897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1048510 with addr=10.0.0.2, port=4420 00:25:50.703 [2024-12-09 05:20:27.224905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048510 is same with the state(6) to be set 00:25:50.703 [2024-12-09 05:20:27.225088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1048510 (9): Bad file descriptor 00:25:50.703 [2024-12-09 05:20:27.225253] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.703 [2024-12-09 05:20:27.225266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.703 [2024-12-09 05:20:27.225276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.703 [2024-12-09 05:20:27.225284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.703 [2024-12-09 05:20:27.225292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.703 05:20:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.703 05:20:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3732050 00:25:50.703 [2024-12-09 05:20:27.237545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.703 [2024-12-09 05:20:27.302135] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:25:52.082 4536.83 IOPS, 17.72 MiB/s [2024-12-09T04:20:29.668Z]
5418.71 IOPS, 21.17 MiB/s [2024-12-09T04:20:30.603Z]
6081.62 IOPS, 23.76 MiB/s [2024-12-09T04:20:31.536Z]
6610.33 IOPS, 25.82 MiB/s [2024-12-09T04:20:32.471Z]
7017.00 IOPS, 27.41 MiB/s [2024-12-09T04:20:33.404Z]
7344.00 IOPS, 28.69 MiB/s [2024-12-09T04:20:34.783Z]
7630.08 IOPS, 29.81 MiB/s [2024-12-09T04:20:35.718Z]
7864.15 IOPS, 30.72 MiB/s [2024-12-09T04:20:36.654Z]
8073.57 IOPS, 31.54 MiB/s [2024-12-09T04:20:36.654Z]
8236.73 IOPS, 32.17 MiB/s
00:26:00.008 Latency(us)
00:26:00.008 [2024-12-09T04:20:36.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:00.008 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:00.008 Verification LBA range: start 0x0 length 0x4000
00:26:00.008 Nvme1n1 : 15.01 8237.76 32.18 10925.15 0.00 6659.58 612.62 18350.08
00:26:00.008 [2024-12-09T04:20:36.654Z] ===================================================================================================================
00:26:00.008 [2024-12-09T04:20:36.654Z] Total : 8237.76 32.18 10925.15 0.00 6659.58 612.62 18350.08
00:26:00.008 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:26:00.008 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:00.008 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.008 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:00.008 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.008 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:26:00.008 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:26:00.008 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:00.008 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:26:00.008 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:00.008 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:26:00.008 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:00.008 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:00.008 rmmod nvme_tcp
00:26:00.008 rmmod nvme_fabrics
00:26:00.008 rmmod nvme_keyring
00:26:00.008 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:00.008 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:26:00.008 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:26:00.008 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3732976 ']'
00:26:00.008 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3732976
00:26:00.008 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3732976 ']'
00:26:00.008 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3732976
00:26:00.008 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:26:00.008 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:00.008 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3732976
00:26:00.275 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:00.275 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:00.275 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3732976'
00:26:00.275 killing process with pid 3732976
00:26:00.275 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3732976
00:26:00.275 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3732976
00:26:00.275 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:00.275 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:00.275 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:00.275 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr
00:26:00.275 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:00.275 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save
00:26:00.275 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore
00:26:00.535 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:00.535 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:00.535 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:00.535 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:00.535 05:20:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:02.443 05:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:02.443
00:26:02.443 real 0m25.954s
00:26:02.443 user 1m1.226s
00:26:02.443 sys 0m6.541s
00:26:02.443 05:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:02.443 05:20:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:02.443 ************************************
00:26:02.443 END TEST nvmf_bdevperf
00:26:02.443 ************************************
00:26:02.443 05:20:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:26:02.443 05:20:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:02.443 05:20:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:02.443 05:20:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.443 ************************************
00:26:02.443 START TEST nvmf_target_disconnect
00:26:02.443 ************************************
00:26:02.443 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:26:02.704 * Looking for test storage...
00:26:02.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:02.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.704 --rc genhtml_branch_coverage=1 00:26:02.704 --rc genhtml_function_coverage=1 00:26:02.704 --rc genhtml_legend=1 00:26:02.704 --rc geninfo_all_blocks=1 00:26:02.704 --rc geninfo_unexecuted_blocks=1 00:26:02.704 00:26:02.704 ' 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:02.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.704 --rc genhtml_branch_coverage=1 00:26:02.704 --rc genhtml_function_coverage=1 00:26:02.704 --rc genhtml_legend=1 00:26:02.704 --rc geninfo_all_blocks=1 00:26:02.704 --rc geninfo_unexecuted_blocks=1 00:26:02.704 00:26:02.704 ' 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:02.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.704 --rc genhtml_branch_coverage=1 00:26:02.704 --rc genhtml_function_coverage=1 00:26:02.704 --rc genhtml_legend=1 00:26:02.704 --rc geninfo_all_blocks=1 00:26:02.704 --rc geninfo_unexecuted_blocks=1 00:26:02.704 00:26:02.704 ' 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:02.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.704 --rc genhtml_branch_coverage=1 00:26:02.704 --rc genhtml_function_coverage=1 00:26:02.704 --rc genhtml_legend=1 00:26:02.704 --rc geninfo_all_blocks=1 00:26:02.704 --rc geninfo_unexecuted_blocks=1 00:26:02.704 00:26:02.704 ' 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:02.704 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:02.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:26:02.705 05:20:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:07.975 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:07.975 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:07.975 Found net devices under 0000:86:00.0: cvl_0_0 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.975 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:07.976 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.976 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:07.976 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:07.976 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.976 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:07.976 Found net devices under 0000:86:00.1: cvl_0_1 00:26:07.976 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.976 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:07.976 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:26:07.976 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:07.976 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:07.976 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:07.976 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:26:07.976 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:07.976 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:07.976 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:07.976 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:07.976 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:07.976 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:07.976 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:07.976 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:07.976 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:07.976 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:07.976 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:07.976 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:07.976 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:07.976 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:08.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:08.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:26:08.235 00:26:08.235 --- 10.0.0.2 ping statistics --- 00:26:08.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.235 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:08.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:08.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:26:08.235 00:26:08.235 --- 10.0.0.1 ping statistics --- 00:26:08.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.235 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:08.235 ************************************ 00:26:08.235 START TEST nvmf_target_disconnect_tc1 00:26:08.235 ************************************ 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:08.235 05:20:44 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:08.235 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:08.495 [2024-12-09 05:20:44.940536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.495 [2024-12-09 05:20:44.940652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2299ac0 with addr=10.0.0.2, port=4420 00:26:08.495 [2024-12-09 05:20:44.940715] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:08.495 [2024-12-09 05:20:44.940744] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:08.495 [2024-12-09 05:20:44.940764] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:26:08.495 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:08.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:08.495 Initializing NVMe Controllers 00:26:08.495 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:26:08.495 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:08.495 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:08.495 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:08.495 00:26:08.495 real 0m0.147s 00:26:08.495 user 0m0.086s 00:26:08.495 sys 0m0.061s 00:26:08.495 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:08.495 05:20:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:08.495 ************************************ 00:26:08.495 END TEST nvmf_target_disconnect_tc1 00:26:08.495 ************************************ 00:26:08.495 05:20:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:08.495 05:20:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:08.495 05:20:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:26:08.495 05:20:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:08.495 ************************************ 00:26:08.495 START TEST nvmf_target_disconnect_tc2 00:26:08.495 ************************************ 00:26:08.495 05:20:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:26:08.495 05:20:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:08.495 05:20:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:08.495 05:20:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:08.495 05:20:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:08.495 05:20:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:08.495 05:20:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3738139 00:26:08.495 05:20:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3738139 00:26:08.495 05:20:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:08.495 05:20:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3738139 ']' 00:26:08.495 05:20:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.495 05:20:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:08.495 05:20:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:08.495 05:20:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:08.495 05:20:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:08.495 [2024-12-09 05:20:45.119751] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:26:08.495 [2024-12-09 05:20:45.119799] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:08.753 [2024-12-09 05:20:45.201661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:08.753 [2024-12-09 05:20:45.245302] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:08.753 [2024-12-09 05:20:45.245339] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:08.753 [2024-12-09 05:20:45.245346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:08.753 [2024-12-09 05:20:45.245352] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:08.753 [2024-12-09 05:20:45.245358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:08.753 [2024-12-09 05:20:45.247042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:26:08.753 [2024-12-09 05:20:45.247161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:26:08.753 [2024-12-09 05:20:45.247270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:08.753 [2024-12-09 05:20:45.247271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:26:09.689 05:20:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:09.689 05:20:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:26:09.689 05:20:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:09.689 05:20:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:09.689 05:20:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:09.689 05:20:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:09.689 05:20:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:09.689 05:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.689 05:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:09.689 Malloc0 00:26:09.689 05:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.689 05:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:09.689 05:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.689 05:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:09.689 [2024-12-09 05:20:46.040310] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:09.689 05:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.689 05:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:09.689 05:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.689 05:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:09.689 05:20:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.689 05:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:09.689 05:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.689 05:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:09.689 05:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.689 05:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:09.689 05:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.689 05:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:09.689 [2024-12-09 05:20:46.068543] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:09.689 05:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.689 05:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:09.689 05:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.689 05:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:09.689 05:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.689 05:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3738388 00:26:09.689 05:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:09.689 05:20:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:11.605 05:20:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3738139 00:26:11.605 05:20:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error 
(sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 [2024-12-09 05:20:48.097026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write 
completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 [2024-12-09 05:20:48.097227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 
00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 [2024-12-09 05:20:48.097425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting 
I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Write completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 Read completed with error (sct=0, sc=8) 00:26:11.605 starting I/O failed 00:26:11.605 [2024-12-09 05:20:48.097616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:11.605 [2024-12-09 05:20:48.097806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-09 05:20:48.097823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-09 05:20:48.098092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-09 05:20:48.098103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-09 05:20:48.098268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-09 05:20:48.098279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-09 05:20:48.098433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-09 05:20:48.098444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-09 05:20:48.098634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-09 05:20:48.098645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-09 05:20:48.098814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-09 05:20:48.098825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-09 05:20:48.098935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-09 05:20:48.098944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-09 05:20:48.099040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-09 05:20:48.099050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 
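The burst of "completed with error (sct=0, sc=8)" entries above is the expected effect of the test hard-killing the target (kill -9 3738139) about two seconds after launching the reconnect example: with -q 32 and core mask 0xF there are four I/O qpairs, each with up to 32 outstanding commands, and every queued command is completed back to the application when its qpair reports the CQ transport error -6 (No such device or address). The status pair sct=0, sc=8 corresponds to the generic "Command Aborted due to SQ Deletion" code that SPDK's host stack generally uses when it aborts commands left outstanding on a failed qpair. The connect() failed, errno = 111 entries that follow are the host's reconnect attempts while nothing is listening on 10.0.0.2:4420. Reduced to a sketch (PID handling is illustrative; the script itself tracks nvmfpid and reconnectpid via waitforlisten and $!):

# Sketch only: the tc2 disconnect sequence, with the reconnect flags exactly as traced above.
SPDK_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_PATH/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
reconnectpid=$!
sleep 2              # let the workload queue I/O on all four qpairs
kill -9 "$nvmfpid"   # hard-kill nvmf_tgt: outstanding I/O is aborted, connect() starts failing with ECONNREFUSED
sleep 2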
00:26:11.605 [2024-12-09 05:20:48.099166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-09 05:20:48.099176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-09 05:20:48.099336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-09 05:20:48.099347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-09 05:20:48.099515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-09 05:20:48.099525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-09 05:20:48.099625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.605 [2024-12-09 05:20:48.099634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.605 qpair failed and we were unable to recover it. 00:26:11.605 [2024-12-09 05:20:48.099817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.099827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.099926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.099935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.100039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.100049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.100143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.100152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.100265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.100276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.100352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.100361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 
00:26:11.606 [2024-12-09 05:20:48.100441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.100451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.100605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.100616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.100843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.100873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.101017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.101049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.101251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.101283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.101489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.101521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.101667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.101698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.101890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.101921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.102084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.102118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.102247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.102257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 
00:26:11.606 [2024-12-09 05:20:48.102428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.102439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.102620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.102644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.102791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.102802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.102892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.102902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.103067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.103078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.103154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.103163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.103250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.103259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.103411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.103422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.103579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.103589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.103676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.103685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 
00:26:11.606 [2024-12-09 05:20:48.103758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.103768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.103912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.103923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.104090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.104101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.104159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.104168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.104309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.104326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.104472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.104483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.104583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.104592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.104683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.104692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.104848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.104858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.104967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.104978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 
00:26:11.606 [2024-12-09 05:20:48.105138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.105149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.105243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.105253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.105343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.105352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.105566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.105579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.105678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.105690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.105911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.105925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.105994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.106013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.106094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.106107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.106262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.106276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.106448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.106462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 
00:26:11.606 [2024-12-09 05:20:48.106568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.106582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.106827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.106840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.106951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.106965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.107089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.107103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.107250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.107264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.107362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.107375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.107540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.107554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.107662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.107676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.107846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.107860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.108019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.108033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 
00:26:11.606 [2024-12-09 05:20:48.108199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.108213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.108485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.108507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.108675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.108687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.108897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.108908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.109171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.109184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.109347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.109357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.109597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.606 [2024-12-09 05:20:48.109607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.606 qpair failed and we were unable to recover it. 00:26:11.606 [2024-12-09 05:20:48.109778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.109789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.109874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.109883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.109977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.109986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 
00:26:11.607 [2024-12-09 05:20:48.110148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.110159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.110309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.110319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.110528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.110538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.110697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.110707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.110798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.110811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.110968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.110978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.111140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.111151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.111241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.111250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.111413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.111424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.111610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.111641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 
00:26:11.607 [2024-12-09 05:20:48.111782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.111813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.112009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.112041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.112234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.112245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.112393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.112427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.112635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.112668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.112792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.112824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.113025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.113036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.113134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.113143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.113231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.113241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.113321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.113330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 
00:26:11.607 [2024-12-09 05:20:48.113478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.113489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.113651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.113661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.113758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.113767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.113978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.113988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.114146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.114157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.114300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.114310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.114459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.114470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.114574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.114585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.114733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.114743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.114839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.114848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 
00:26:11.607 [2024-12-09 05:20:48.115011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.115022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.115258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.115319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.115538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.115574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.115727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.115759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.115950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.115982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.116137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.116170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.116399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.116430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.116687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.116719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.116936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.116968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.117104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.117137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 
00:26:11.607 [2024-12-09 05:20:48.117339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.117353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.117527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.117559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.117697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.117728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.118020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.118054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.118229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.118244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.118501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.118533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.118675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.118706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.118892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.118924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.119069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.119084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.119200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.119214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 
00:26:11.607 [2024-12-09 05:20:48.119300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.119314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.119396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.119410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.607 [2024-12-09 05:20:48.119631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.607 [2024-12-09 05:20:48.119645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.607 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.119792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.119806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.119969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.119984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.120084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.120097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.120241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.120252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.120337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.120346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.120580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.120591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.120701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.120711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 
00:26:11.608 [2024-12-09 05:20:48.120798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.120807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.120959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.120969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.121115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.121126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.121278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.121288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.121391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.121401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.121557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.121568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.121674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.121684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.121782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.121793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.121959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.121969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.122127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.122138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 
00:26:11.608 [2024-12-09 05:20:48.122235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.122245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.122350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.122363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.122519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.122529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.122674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.122684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.122828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.122838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.122995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.123009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.123191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.123201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.123404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.123436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.123656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.123688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.123885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.123916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 
00:26:11.608 [2024-12-09 05:20:48.124109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.124119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.124207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.124216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.124449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.124459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.124553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.124563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.124648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.124658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.124740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.124750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.124904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.124915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.125092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.125103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.125291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.125321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.125456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.125487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 
00:26:11.608 [2024-12-09 05:20:48.125791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.125822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.126018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.126051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.126243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.126254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.126425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.126455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.126642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.126674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.126954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.126985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.127261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.127292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.127439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.127470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.127706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.127717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.127893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.127904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 
00:26:11.608 [2024-12-09 05:20:48.128050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.128091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.128312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.128345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.128491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.128522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.128705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.128736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.128889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.128921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.129071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.129104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.129376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.129407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.129607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.129639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.129840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.129871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.130016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.130049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 
00:26:11.608 [2024-12-09 05:20:48.130171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.130201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.130383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.130396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.130535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.130546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.130675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.130706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.130979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.131020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.608 [2024-12-09 05:20:48.131215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.608 [2024-12-09 05:20:48.131225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.608 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.131316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.131325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.131465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.131476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.131586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.131596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.131738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.131748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 
00:26:11.609 [2024-12-09 05:20:48.131903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.131914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.132057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.132068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.132213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.132224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.132371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.132381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.132485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.132495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.132584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.132593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.132686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.132696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.132797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.132807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.133016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.133027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.133120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.133130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 
00:26:11.609 [2024-12-09 05:20:48.133226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.133236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.133339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.133349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.133502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.133512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.133760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.133770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.133850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.133860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.134017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.134027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.134182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.134193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.134348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.134359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.134550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.134580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.134771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.134803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 
00:26:11.609 [2024-12-09 05:20:48.135010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.135042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.135190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.135200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.135299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.135310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.135543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.135553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.135644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.135654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.135909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.135940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.136089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.136122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.136260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.136291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.136517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.136548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.136751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.136783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 
00:26:11.609 [2024-12-09 05:20:48.136980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.137026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.137285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.137323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.137510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.137541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.137757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.137789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.137908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.137919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.138022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.138033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.138162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.138173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.138246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.138255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.138462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.138473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.138554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.138563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 
00:26:11.609 [2024-12-09 05:20:48.138708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.138718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.138925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.138936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.139022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.139033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.139174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.139186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.139417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.139428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.139517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.139526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.139607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.139617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.139849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.139860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.140115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.140148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.140441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.140473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 
00:26:11.609 [2024-12-09 05:20:48.140617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.140648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.140790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.140821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.141012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.141044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.141250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.141281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.141478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.141509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.141788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.141819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.142098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.142109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.142253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.142263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.142410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.609 [2024-12-09 05:20:48.142420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.609 qpair failed and we were unable to recover it. 00:26:11.609 [2024-12-09 05:20:48.142647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-09 05:20:48.142658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 
00:26:11.610 [2024-12-09 05:20:48.142890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-09 05:20:48.142901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-09 05:20:48.143064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-09 05:20:48.143075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-09 05:20:48.143180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-09 05:20:48.143190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-09 05:20:48.143357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-09 05:20:48.143368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-09 05:20:48.143600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-09 05:20:48.143631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-09 05:20:48.143766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-09 05:20:48.143797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-09 05:20:48.143985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-09 05:20:48.144026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-09 05:20:48.144205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-09 05:20:48.144215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-09 05:20:48.144291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-09 05:20:48.144300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 00:26:11.610 [2024-12-09 05:20:48.144379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.610 [2024-12-09 05:20:48.144389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.610 qpair failed and we were unable to recover it. 
00:26:11.610 [2024-12-09 05:20:48.144547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.610 [2024-12-09 05:20:48.144557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420
00:26:11.610 qpair failed and we were unable to recover it.
00:26:11.610 [... the same three-message triplet (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every retry timestamped 2024-12-09 05:20:48.144 through 05:20:48.182 ...]
00:26:11.613 [2024-12-09 05:20:48.182794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.613 [2024-12-09 05:20:48.182825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420
00:26:11.613 qpair failed and we were unable to recover it.
00:26:11.613 [2024-12-09 05:20:48.182948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-09 05:20:48.182979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-09 05:20:48.183254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-09 05:20:48.183286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-09 05:20:48.183538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-09 05:20:48.183569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-09 05:20:48.183791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-09 05:20:48.183822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-09 05:20:48.183966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-09 05:20:48.183997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-09 05:20:48.184207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-09 05:20:48.184228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-09 05:20:48.184420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-09 05:20:48.184452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-09 05:20:48.184725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-09 05:20:48.184757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-09 05:20:48.184984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-09 05:20:48.185024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-09 05:20:48.185146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-09 05:20:48.185177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 
00:26:11.613 [2024-12-09 05:20:48.185378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-09 05:20:48.185410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-09 05:20:48.185684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-09 05:20:48.185714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-09 05:20:48.185828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-09 05:20:48.185861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-09 05:20:48.186071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-09 05:20:48.186104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-09 05:20:48.186291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-09 05:20:48.186322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-09 05:20:48.186519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-09 05:20:48.186529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-09 05:20:48.186749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-09 05:20:48.186759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-09 05:20:48.186857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-09 05:20:48.186867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-09 05:20:48.187024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-09 05:20:48.187034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-09 05:20:48.187193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-09 05:20:48.187203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 
00:26:11.613 [2024-12-09 05:20:48.187365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-09 05:20:48.187375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-09 05:20:48.187536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-09 05:20:48.187547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.613 [2024-12-09 05:20:48.187627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.613 [2024-12-09 05:20:48.187637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.613 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.187885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.187895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.188036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.188047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.188140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.188149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.188253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.188263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.188366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.188376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.188486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.188497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.188668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.188679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 
00:26:11.614 [2024-12-09 05:20:48.188767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.188777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.188917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.188929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.189085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.189107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.189262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.189272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.189443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.189453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.189627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.189638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.189737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.189747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.189914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.189924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.190080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.190091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.190239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.190250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 
00:26:11.614 [2024-12-09 05:20:48.190339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.190348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.190496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.190506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.190638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.190648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.190706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.190716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.190786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.190796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.190891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.190901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.191051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.191061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.191205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.191215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.191378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.191388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.191547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.191557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 
00:26:11.614 [2024-12-09 05:20:48.191712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.191723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.192051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.192085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.192283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.192315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.192509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.192540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.192811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.192844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.193098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.193131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.193328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.193359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.193559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.193590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.193872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.193904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.194120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.194131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 
00:26:11.614 [2024-12-09 05:20:48.194304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.194314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.194578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.194608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.194817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.194848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.195104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.195137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.195341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.195373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.195587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.195597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.195829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.195839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.196048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.196059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.196142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.196152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.196293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.196307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 
00:26:11.614 [2024-12-09 05:20:48.196404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.196417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.196580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.196597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.196842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.196865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.197024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.197039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.197193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.197204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.197295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.197305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.197464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.197475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.197624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.197634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.197800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.197810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.197960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.197971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 
00:26:11.614 [2024-12-09 05:20:48.198053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.198063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.198222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.198232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.198315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.198325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.198409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.198418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.198600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.198611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.198722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.198733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.614 qpair failed and we were unable to recover it. 00:26:11.614 [2024-12-09 05:20:48.198905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.614 [2024-12-09 05:20:48.198916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.199075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.199086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.199184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.199193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.199368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.199379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 
00:26:11.615 [2024-12-09 05:20:48.199570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.199601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.199746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.199776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.199923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.199954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.200174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.200206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.200493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.200503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.200586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.200606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.200709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.200720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.200859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.200870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.201017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.201027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.201203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.201213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 
00:26:11.615 [2024-12-09 05:20:48.201308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.201318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.201395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.201404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.201568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.201578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.201725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.201735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.201827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.201836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.201922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.201932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.202085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.202095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.202185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.202194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.202332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.202342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.202444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.202454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 
00:26:11.615 [2024-12-09 05:20:48.202664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.202674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.202750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.202761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.202942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.202952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.203095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.203107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.203222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.203232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.203372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.203382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.203525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.203536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.203641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.203651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.203711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.203721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.203946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.203977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 
00:26:11.615 [2024-12-09 05:20:48.204196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.204228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.204478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.204509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.204763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.204794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.205075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.205086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.205247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.205257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.205416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.205426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.205508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.205518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.205596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.205605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.205702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.205712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.205863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.205873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 
00:26:11.615 [2024-12-09 05:20:48.206026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.206037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.206142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.206152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.206238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.206248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.206394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.206405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.206507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.206518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.206733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.206744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.206963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.206995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.207261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.207292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.207535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.207571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.207694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.207710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 
00:26:11.615 [2024-12-09 05:20:48.207889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.207904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.208014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.208030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.208140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.208154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.208253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.208267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.208372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.208386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.208570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.208585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.208700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.208714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.208824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.208838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.209023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.209039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 00:26:11.615 [2024-12-09 05:20:48.209148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.615 [2024-12-09 05:20:48.209162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.615 qpair failed and we were unable to recover it. 
00:26:11.615 [2024-12-09 05:20:48.209262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.209277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.209378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.209392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.209557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.209572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.209659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.209672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.209761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.209776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.209867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.209881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.210032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.210047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.210219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.210234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.210341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.210355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.210442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.210454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 
00:26:11.616 [2024-12-09 05:20:48.210616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.210630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.210714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.210727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.210900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.210914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.211075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.211090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.211196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.211209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.211366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.211381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.211541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.211580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.211772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.211805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.211941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.211971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.212104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.212137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 
00:26:11.616 [2024-12-09 05:20:48.212295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.212310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.212460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.212474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.212700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.212715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.212885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.212900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.213067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.213082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.213267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.213299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.213551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.213582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.213780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.213810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.214036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.214074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.214198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.214228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 
00:26:11.616 [2024-12-09 05:20:48.214365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.214379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.214544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.214559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.214743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.214757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.214925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.214940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.215104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.215148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.215347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.215380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.215576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.215607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.215747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.215761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.215917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.215932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.216206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.216221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 
00:26:11.616 [2024-12-09 05:20:48.216370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.216384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.216499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.216514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.216680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.216695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.216799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.216814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.216972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.216987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.217103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.217118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.217217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.217232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.217380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.217393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.217487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.217501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.217605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.217619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 
00:26:11.616 [2024-12-09 05:20:48.217837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.217851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.218069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.218084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.218326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.218357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.218488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.218520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.218738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.218769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.219030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.219064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.219269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.219301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.219507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.616 [2024-12-09 05:20:48.219521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.616 qpair failed and we were unable to recover it. 00:26:11.616 [2024-12-09 05:20:48.219618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.219632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.219783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.219798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 
00:26:11.617 [2024-12-09 05:20:48.219894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.219908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.220072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.220087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.220248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.220263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.220501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.220532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.220735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.220766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.220965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.220996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.221154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.221185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.221302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.221317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.221476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.221493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.221660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.221674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 
00:26:11.617 [2024-12-09 05:20:48.221916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.221931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.222079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.222095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.222314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.222328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.222506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.222520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.222603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.222617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.222849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.222887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.223093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.223106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.223217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.223227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.223436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.223446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.223586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.223596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 
00:26:11.617 [2024-12-09 05:20:48.223690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.223700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.223844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.223855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.223933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.223942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.224150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.224161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.224247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.224256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.224429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.224439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.224612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.224643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.224791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.224824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.225044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.225079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.225300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.225310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 
00:26:11.617 [2024-12-09 05:20:48.225448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.225458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.225600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.225610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.225760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.225771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.225852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.225862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.225951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.225960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.226119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.226130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.226282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.226292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.226489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.226520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.226665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.226697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.226841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.226873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 
00:26:11.617 [2024-12-09 05:20:48.227024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.227056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.227246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.227257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.227472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.227504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.227641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.227672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.227876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.227908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.228160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.228194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.228334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.228365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.228580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.228612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.228811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.228849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.229134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.229167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 
00:26:11.617 [2024-12-09 05:20:48.229342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.229374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.229575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.229607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.229793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.229824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.230020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.230053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.230202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.230234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.230430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.230440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.230616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.230647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.230858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.230890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.231078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.231111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.231294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.231305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 
00:26:11.617 [2024-12-09 05:20:48.231465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.231490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.617 [2024-12-09 05:20:48.231610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.617 [2024-12-09 05:20:48.231642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.617 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.231807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.231839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.232092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.232125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.232381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.232413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.232662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.232673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.232860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.232870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.233030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.233041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.233183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.233194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.233291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.233300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 
00:26:11.618 [2024-12-09 05:20:48.233399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.233409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.233637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.233647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.233752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.233762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.233919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.233930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.234167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.234178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.234273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.234284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.234398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.234409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.234513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.234523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.234673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.234684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.234778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.234788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 
00:26:11.618 [2024-12-09 05:20:48.234928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.234939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.235024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.235034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.235121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.235130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.235272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.235283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.235444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.235454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.235549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.235559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.235778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.235788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.235962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.235973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.236142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.236155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.236308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.236319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 
00:26:11.618 [2024-12-09 05:20:48.236475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.236486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.236647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.236658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.618 [2024-12-09 05:20:48.236904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.618 [2024-12-09 05:20:48.236914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.618 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.237083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.237094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.237188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.237197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.237260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.237270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.237420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.237431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.237590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.237600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.237781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.237791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.237919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.237930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 
00:26:11.906 [2024-12-09 05:20:48.238026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.238035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.238222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.238232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.238388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.238398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.238498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.238508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.238715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.238726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.238885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.238895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.239119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.239130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.239219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.239228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.239409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.239420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.239602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.239612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 
00:26:11.906 [2024-12-09 05:20:48.239785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.239795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.240010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.240021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.240174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.240184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.240273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.240282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.240429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.240439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.240546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.240556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.240655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.240665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.240808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.240819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.240977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.240987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.241170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.241180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 
00:26:11.906 [2024-12-09 05:20:48.241338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.241348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.241517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.241527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.241621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.241630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.241716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.241726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.906 [2024-12-09 05:20:48.241829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.906 [2024-12-09 05:20:48.241840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.906 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.242003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.242013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.242233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.242244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.242389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.242400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.242555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.242567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.242726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.242737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 
00:26:11.907 [2024-12-09 05:20:48.242822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.242831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.242979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.242989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.243073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.243084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.243166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.243176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.243284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.243293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.243375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.243384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.243543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.243553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.243701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.243712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.243795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.243804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.243949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.243959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 
00:26:11.907 [2024-12-09 05:20:48.244097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.244108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.244194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.244203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.244298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.244309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.244463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.244473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.244560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.244570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.244746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.244756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.244912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.244923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.245068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.245079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.245223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.245233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.245397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.245408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 
00:26:11.907 [2024-12-09 05:20:48.245617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.245627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.245770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.245781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.245933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.245944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.246176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.246188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.246399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.246409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.246515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.246526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.246733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.246743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.246920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.907 [2024-12-09 05:20:48.246930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.907 qpair failed and we were unable to recover it. 00:26:11.907 [2024-12-09 05:20:48.247102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.247114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.247209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.247219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 
00:26:11.908 [2024-12-09 05:20:48.247372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.247416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.247565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.247597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.247737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.247769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.247958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.247989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.248280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.248312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.248494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.248504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.248716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.248746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.248943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.248974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.249208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.249246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.249461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.249493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 
00:26:11.908 [2024-12-09 05:20:48.249636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.249666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.249801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.249833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.250026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.250059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.250260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.250291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.250538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.250549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.250704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.250714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.250867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.250877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.251037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.251048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.251212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.251222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.251376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.251419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 
00:26:11.908 [2024-12-09 05:20:48.251554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.251586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.251707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.251738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.251970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.252011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.252147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.252178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.252316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.252349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.252551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.252582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.252766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.252797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.253071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.253104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.253310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.253320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.253485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.253517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 
00:26:11.908 [2024-12-09 05:20:48.253724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.253755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.908 qpair failed and we were unable to recover it. 00:26:11.908 [2024-12-09 05:20:48.253942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.908 [2024-12-09 05:20:48.253974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.254169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.254201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.254341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.254372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.254640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.254671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.254873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.254906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.255105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.255138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.255280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.255290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.255372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.255381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.255559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.255569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 
00:26:11.909 [2024-12-09 05:20:48.255712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.255742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.255949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.255980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.256196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.256228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.256484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.256494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.256639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.256649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.256811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.256821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.257031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.257063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.257269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.257300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.257441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.257478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.257674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.257705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 
00:26:11.909 [2024-12-09 05:20:48.257913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.257944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.258187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.258220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.258469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.258500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.258623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.258633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.258815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.258825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.259018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.259050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.259243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.259274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.259478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.259488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.259575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.259584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.259730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.259741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 
00:26:11.909 [2024-12-09 05:20:48.259881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.259891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.260044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.260055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.260236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.260267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.260474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.260505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.260638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.260669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.260894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.909 [2024-12-09 05:20:48.260925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.909 qpair failed and we were unable to recover it. 00:26:11.909 [2024-12-09 05:20:48.261210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.261243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.261431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.261462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.261680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.261712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.261938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.261970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 
00:26:11.910 [2024-12-09 05:20:48.262177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.262208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.262393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.262424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.262526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.262537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.262683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.262694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.262792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.262802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.262980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.262991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.263148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.263158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.263260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.263269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.263424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.263434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.263580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.263590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 
00:26:11.910 [2024-12-09 05:20:48.263755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.263765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.263913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.263923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.264084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.264095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.264172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.264181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.264284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.264294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.264450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.264461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.264614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.264624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.264796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.264807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.264893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.264904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.265012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.265022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 
00:26:11.910 [2024-12-09 05:20:48.265109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.265118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.265199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.265209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.265344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.265355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.265443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.265453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.265643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.265653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.265743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.265753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.265962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.265973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.266077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.266087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.910 qpair failed and we were unable to recover it. 00:26:11.910 [2024-12-09 05:20:48.266178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.910 [2024-12-09 05:20:48.266188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.911 qpair failed and we were unable to recover it. 00:26:11.911 [2024-12-09 05:20:48.266290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.911 [2024-12-09 05:20:48.266299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.911 qpair failed and we were unable to recover it. 
00:26:11.911 [2024-12-09 05:20:48.266511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.911 [2024-12-09 05:20:48.266522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420
00:26:11.911 qpair failed and we were unable to recover it.
00:26:11.911-00:26:11.913 (the same error pair repeats continuously from 05:20:48.266511 through 05:20:48.301047: posix_sock_create reports connect() failed, errno = 111, then nvme_tcp_qpair_connect_sock reports a sock connection error for addr=10.0.0.2, port=4420 on tqpair 0x7f96b4000b90, 0x7f96b0000b90, 0x7f96bc000b90, or 0x1a7fbe0, and every attempt ends with "qpair failed and we were unable to recover it.")
00:26:11.917 [2024-12-09 05:20:48.301189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.917 [2024-12-09 05:20:48.301200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.917 qpair failed and we were unable to recover it. 00:26:11.917 [2024-12-09 05:20:48.301354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.917 [2024-12-09 05:20:48.301364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.917 qpair failed and we were unable to recover it. 00:26:11.917 [2024-12-09 05:20:48.301460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.917 [2024-12-09 05:20:48.301470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.917 qpair failed and we were unable to recover it. 00:26:11.917 [2024-12-09 05:20:48.301627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.917 [2024-12-09 05:20:48.301638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.917 qpair failed and we were unable to recover it. 00:26:11.917 [2024-12-09 05:20:48.301725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.917 [2024-12-09 05:20:48.301734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.917 qpair failed and we were unable to recover it. 00:26:11.917 [2024-12-09 05:20:48.301810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.917 [2024-12-09 05:20:48.301821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.917 qpair failed and we were unable to recover it. 00:26:11.917 [2024-12-09 05:20:48.301989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.917 [2024-12-09 05:20:48.302005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.917 qpair failed and we were unable to recover it. 00:26:11.917 [2024-12-09 05:20:48.302108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.917 [2024-12-09 05:20:48.302119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.917 qpair failed and we were unable to recover it. 00:26:11.917 [2024-12-09 05:20:48.302237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.917 [2024-12-09 05:20:48.302247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.917 qpair failed and we were unable to recover it. 00:26:11.917 [2024-12-09 05:20:48.302417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.917 [2024-12-09 05:20:48.302428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.917 qpair failed and we were unable to recover it. 
00:26:11.917 [2024-12-09 05:20:48.302594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.917 [2024-12-09 05:20:48.302605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.917 qpair failed and we were unable to recover it. 00:26:11.917 [2024-12-09 05:20:48.302696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.917 [2024-12-09 05:20:48.302706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.917 qpair failed and we were unable to recover it. 00:26:11.917 [2024-12-09 05:20:48.302869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.917 [2024-12-09 05:20:48.302881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.917 qpair failed and we were unable to recover it. 00:26:11.917 [2024-12-09 05:20:48.302975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.917 [2024-12-09 05:20:48.302985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.917 qpair failed and we were unable to recover it. 00:26:11.917 [2024-12-09 05:20:48.303077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.917 [2024-12-09 05:20:48.303087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.303172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.303183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.303338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.303349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.303438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.303448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.303604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.303615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.303717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.303728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 
00:26:11.918 [2024-12-09 05:20:48.303815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.303825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.303902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.303912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.304018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.304028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.304198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.304210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.304313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.304325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.304541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.304551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.304696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.304706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.304814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.304825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.305011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.305023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.305178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.305189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 
00:26:11.918 [2024-12-09 05:20:48.305266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.305275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.305375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.305385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.305478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.305489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.305648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.305659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.305742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.305752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.305930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.305941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.306040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.306050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.306127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.306137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.306346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.306356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.306460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.306471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 
00:26:11.918 [2024-12-09 05:20:48.306632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.306643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.306806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.306817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.306893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.306902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.306989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.307004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.307091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.307101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.307197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.307209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.307300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.307312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.307404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.918 [2024-12-09 05:20:48.307415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.918 qpair failed and we were unable to recover it. 00:26:11.918 [2024-12-09 05:20:48.307553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.307563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.307769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.307780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 
00:26:11.919 [2024-12-09 05:20:48.307989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.308003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.308096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.308106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.308204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.308214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.308389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.308399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.308555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.308566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.308775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.308785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.308884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.308894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.308969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.308981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.309141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.309152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.309252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.309263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 
00:26:11.919 [2024-12-09 05:20:48.309343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.309353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.309585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.309596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.309684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.309695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.309850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.309861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.310021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.310032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.310119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.310130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.310290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.310300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.310386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.310397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.310503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.310514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.310723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.310734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 
00:26:11.919 [2024-12-09 05:20:48.310888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.310898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.311060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.311072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.311168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.311179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.311351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.311363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.311471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.311483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.311652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.311684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.311817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.311850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.311990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.312030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.312220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.312253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.312420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.312430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 
00:26:11.919 [2024-12-09 05:20:48.312641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.312673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.312813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.919 [2024-12-09 05:20:48.312846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.919 qpair failed and we were unable to recover it. 00:26:11.919 [2024-12-09 05:20:48.313103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.313136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.313343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.313375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.313577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.313609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.313880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.313891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.314033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.314047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.314153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.314163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.314317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.314328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.314425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.314435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 
00:26:11.920 [2024-12-09 05:20:48.314594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.314605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.314788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.314799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.314939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.314949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.315058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.315069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.315154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.315165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.315240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.315249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.315390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.315402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.315502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.315512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.315656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.315667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.315804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.315815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 
00:26:11.920 [2024-12-09 05:20:48.315892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.315903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.316055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.316066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.316240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.316251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.316338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.316348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.316575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.316586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.316675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.316685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.316838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.316848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.317007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.317018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.317162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.317172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.317320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.317331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 
00:26:11.920 [2024-12-09 05:20:48.317489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.317500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.317710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.317721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.317809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.317820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.317976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.317987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.318140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.318152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.920 [2024-12-09 05:20:48.318314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.920 [2024-12-09 05:20:48.318325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.920 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-09 05:20:48.318482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.318493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-09 05:20:48.318579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.318590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-09 05:20:48.318729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.318739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-09 05:20:48.318831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.318841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 
00:26:11.921 [2024-12-09 05:20:48.318928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.318938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-09 05:20:48.319093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.319104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-09 05:20:48.319245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.319255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-09 05:20:48.319386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.319396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-09 05:20:48.319566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.319577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-09 05:20:48.319676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.319687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-09 05:20:48.319840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.319853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-09 05:20:48.320017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.320029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-09 05:20:48.320107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.320117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-09 05:20:48.320277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.320289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 
00:26:11.921 [2024-12-09 05:20:48.320366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.320376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-09 05:20:48.320524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.320534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-09 05:20:48.320664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.320674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-09 05:20:48.320771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.320781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-09 05:20:48.320878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.320888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-09 05:20:48.320990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.321005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-09 05:20:48.321061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.321071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-09 05:20:48.321222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.321232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-09 05:20:48.321290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.321299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-09 05:20:48.321479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.321489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 
00:26:11.921 [2024-12-09 05:20:48.321707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.321717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-09 05:20:48.321861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.321872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-09 05:20:48.322132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.322143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-09 05:20:48.322336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.322346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.921 qpair failed and we were unable to recover it. 00:26:11.921 [2024-12-09 05:20:48.322435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.921 [2024-12-09 05:20:48.322446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.322612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.322622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.322715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.322725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.322963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.322973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.323185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.323195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.323290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.323300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 
00:26:11.922 [2024-12-09 05:20:48.323468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.323479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.323633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.323645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.323737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.323748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.323855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.323865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.323965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.323975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.324123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.324134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.324227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.324237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.324315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.324324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.324426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.324437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.324536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.324546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 
00:26:11.922 [2024-12-09 05:20:48.324630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.324641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.324803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.324814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.324965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.324975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.325125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.325136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.325234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.325245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.325330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.325340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.325491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.325503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.325590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.325600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.325751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.325762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.325846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.325856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 
00:26:11.922 [2024-12-09 05:20:48.326007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.326017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.326115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.326125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.326266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.326276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.326365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.326375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.326518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.326529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.326735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.326746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.326822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.326832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.326933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.326943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.327046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.922 [2024-12-09 05:20:48.327057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.922 qpair failed and we were unable to recover it. 00:26:11.922 [2024-12-09 05:20:48.327206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.327217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 
00:26:11.923 [2024-12-09 05:20:48.327377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.327388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.327533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.327543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.327702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.327712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.327854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.327864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.328011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.328023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.328177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.328189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.328344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.328355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.328623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.328655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.328791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.328822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.328955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.328986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 
00:26:11.923 [2024-12-09 05:20:48.329125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.329156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.329389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.329420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.329626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.329636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.329812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.329856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.330123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.330157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.330352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.330396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.330562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.330576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.330752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.330784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.330936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.330969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.331281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.331315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 
00:26:11.923 [2024-12-09 05:20:48.331441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.331451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.331599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.331610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.331845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.331857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.332033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.332044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.332206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.332217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.332307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.332321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.332475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.332487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.332709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.332719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.332940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.332950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.333128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.333138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 
00:26:11.923 [2024-12-09 05:20:48.333309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.333320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.333485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.333495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.333633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.923 [2024-12-09 05:20:48.333644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.923 qpair failed and we were unable to recover it. 00:26:11.923 [2024-12-09 05:20:48.333745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.333755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.333895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.333905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.334049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.334060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.334278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.334290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.334447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.334459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.334553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.334563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.334733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.334743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 
00:26:11.924 [2024-12-09 05:20:48.334817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.334826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.334972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.334984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.335151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.335162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.335355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.335366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.335467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.335477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.335670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.335680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.335777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.335788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.335880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.335891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.336059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.336071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.336235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.336246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 
00:26:11.924 [2024-12-09 05:20:48.336392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.336429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.336571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.336605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.336792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.336823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.337127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.337166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.337432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.337465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.337592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.337625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.337768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.337801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.338081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.338116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.338238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.338270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.338476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.338519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 
00:26:11.924 [2024-12-09 05:20:48.338688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.338703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.338932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.338964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.339229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.339264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.339379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.339411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.339671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.339705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.339828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.924 [2024-12-09 05:20:48.339861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.924 qpair failed and we were unable to recover it. 00:26:11.924 [2024-12-09 05:20:48.340054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.340099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.340305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.340338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.340542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.340574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.340826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.340841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 
00:26:11.925 [2024-12-09 05:20:48.340994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.341015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.341124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.341139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.341431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.341446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.341602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.341616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.341853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.341866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.342035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.342047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.342203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.342214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.342378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.342389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.342623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.342633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.342805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.342816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 
00:26:11.925 [2024-12-09 05:20:48.342887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.342896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.343047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.343059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.343137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.343147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.343302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.343314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.343488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.343499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.343580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.343591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.343754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.343766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.343978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.343990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.344155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.344166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.344255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.344265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 
00:26:11.925 [2024-12-09 05:20:48.344419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.344430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.344578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.344589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.344805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.344836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.345026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.345062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.345253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.345285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.345520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.345532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.345693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.345704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.345790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.345799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.345943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.345954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 00:26:11.925 [2024-12-09 05:20:48.346040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.925 [2024-12-09 05:20:48.346051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.925 qpair failed and we were unable to recover it. 
00:26:11.926 [2024-12-09 05:20:48.346201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.346212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 00:26:11.926 [2024-12-09 05:20:48.346279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.346290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 00:26:11.926 [2024-12-09 05:20:48.346448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.346460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 00:26:11.926 [2024-12-09 05:20:48.346619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.346630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 00:26:11.926 [2024-12-09 05:20:48.346773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.346784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 00:26:11.926 [2024-12-09 05:20:48.346923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.346934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 00:26:11.926 [2024-12-09 05:20:48.347014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.347027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 00:26:11.926 [2024-12-09 05:20:48.347125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.347135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 00:26:11.926 [2024-12-09 05:20:48.347208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.347218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 00:26:11.926 [2024-12-09 05:20:48.347391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.347401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 
00:26:11.926 [2024-12-09 05:20:48.347635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.347646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 00:26:11.926 [2024-12-09 05:20:48.347787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.347798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 00:26:11.926 [2024-12-09 05:20:48.348030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.348041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 00:26:11.926 [2024-12-09 05:20:48.348199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.348209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 00:26:11.926 [2024-12-09 05:20:48.348315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.348326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 00:26:11.926 [2024-12-09 05:20:48.348553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.348564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 00:26:11.926 [2024-12-09 05:20:48.348665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.348676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 00:26:11.926 [2024-12-09 05:20:48.348848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.348859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 00:26:11.926 [2024-12-09 05:20:48.349011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.349022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 00:26:11.926 [2024-12-09 05:20:48.349177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.349187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 
00:26:11.926 [2024-12-09 05:20:48.349276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.349285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 00:26:11.926 [2024-12-09 05:20:48.349374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.349383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 00:26:11.926 [2024-12-09 05:20:48.349474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.349484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 00:26:11.926 [2024-12-09 05:20:48.349586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.349596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 00:26:11.926 [2024-12-09 05:20:48.349739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.349751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 00:26:11.926 [2024-12-09 05:20:48.349839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.349848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 00:26:11.926 [2024-12-09 05:20:48.350012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.350024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 00:26:11.926 [2024-12-09 05:20:48.350088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.350098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 00:26:11.926 [2024-12-09 05:20:48.350268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.350279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.926 qpair failed and we were unable to recover it. 00:26:11.926 [2024-12-09 05:20:48.350394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.926 [2024-12-09 05:20:48.350406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 
00:26:11.927 [2024-12-09 05:20:48.350498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.350508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.350656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.350666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.350766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.350776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.350879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.350897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.351066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.351081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.351229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.351244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.351341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.351356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.351449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.351464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.351617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.351631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.351724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.351738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 
00:26:11.927 [2024-12-09 05:20:48.351884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.351899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.352129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.352144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.352296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.352310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.352561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.352575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.352799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.352814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.352920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.352935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.353100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.353119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.353214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.353228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.353326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.353340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.353446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.353460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 
00:26:11.927 [2024-12-09 05:20:48.353727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.353742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.353920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.353932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.354019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.354030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.354197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.354208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.354439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.354450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.354543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.354552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.354791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.354801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.354905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.354916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.355086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.355097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.355171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.355180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 
00:26:11.927 [2024-12-09 05:20:48.355266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.355276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.355388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.355398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.927 [2024-12-09 05:20:48.355565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.927 [2024-12-09 05:20:48.355575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.927 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.355659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.355668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.355827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.355838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.355978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.355988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.356077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.356092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.356247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.356262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.356372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.356387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.356553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.356568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 
00:26:11.928 [2024-12-09 05:20:48.356818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.356833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.357003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.357018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.357114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.357128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.357299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.357316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.357553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.357569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.357735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.357750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.357916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.357931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.358098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.358113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.358281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.358293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.358380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.358390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 
00:26:11.928 [2024-12-09 05:20:48.358599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.358610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.358697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.358707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.358851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.358861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.359077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.359088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.359244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.359255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.359467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.359478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.359567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.359577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.359792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.359802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.359897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.359908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.360011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.360021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 
00:26:11.928 [2024-12-09 05:20:48.360114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.360125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.360217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.360228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.360385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.360395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.360477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.360487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.360642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.360653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.360736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.360746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.928 [2024-12-09 05:20:48.360825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.928 [2024-12-09 05:20:48.360835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.928 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.360912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.360924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.360996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.361019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.361110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.361121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 
00:26:11.929 [2024-12-09 05:20:48.361221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.361232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.361332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.361342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.361427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.361438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.361540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.361551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.361699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.361709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.361800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.361811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.361958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.361969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.362067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.362078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.362182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.362193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.362277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.362288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 
00:26:11.929 [2024-12-09 05:20:48.362389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.362399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.362505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.362515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.362596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.362607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.362860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.362872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.363127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.363138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.363207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.363217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.363360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.363370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.363597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.363608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.363697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.363707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.363781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.363791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 
00:26:11.929 [2024-12-09 05:20:48.363873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.363884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.363944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.363954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.364097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.364108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.364263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.364273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.364410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.364420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.364577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.364588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.364694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.364704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.364915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.364925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.365103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.365114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 00:26:11.929 [2024-12-09 05:20:48.365348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.365358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.929 qpair failed and we were unable to recover it. 
00:26:11.929 [2024-12-09 05:20:48.365444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.929 [2024-12-09 05:20:48.365455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.365564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.365575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.365651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.365661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.365740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.365752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.365888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.365898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.365987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.366000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.366086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.366097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.366245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.366255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.366394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.366404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.366496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.366507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 
00:26:11.930 [2024-12-09 05:20:48.366667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.366679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.366765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.366775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.366949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.366959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.367052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.367063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.367219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.367230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.367327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.367337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.367431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.367440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.367527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.367536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.367642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.367652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.367746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.367756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 
00:26:11.930 [2024-12-09 05:20:48.367904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.367914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.368015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.368025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.368168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.368178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.368335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.368347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.368501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.368511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.368723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.368734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.368983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.368993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.369156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.369167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.369258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.369268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.369363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.369375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 
00:26:11.930 [2024-12-09 05:20:48.369533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.369544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.369687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.369698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.369864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.369873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.369959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.369970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.370127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.930 [2024-12-09 05:20:48.370140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.930 qpair failed and we were unable to recover it. 00:26:11.930 [2024-12-09 05:20:48.370292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.370302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-09 05:20:48.370450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.370461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-09 05:20:48.370538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.370548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-09 05:20:48.370654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.370665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-09 05:20:48.370758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.370768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 
00:26:11.931 [2024-12-09 05:20:48.370859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.370869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-09 05:20:48.371059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.371071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-09 05:20:48.371219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.371229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-09 05:20:48.371313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.371323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-09 05:20:48.371474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.371485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-09 05:20:48.371627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.371637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-09 05:20:48.371790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.371800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-09 05:20:48.371884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.371894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-09 05:20:48.372059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.372070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-09 05:20:48.372228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.372240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 
00:26:11.931 [2024-12-09 05:20:48.372333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.372343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-09 05:20:48.372509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.372549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-09 05:20:48.372687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.372719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-09 05:20:48.372916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.372948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-09 05:20:48.373093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.373126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-09 05:20:48.373269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.373302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-09 05:20:48.373503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.373535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-09 05:20:48.373813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.373844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-09 05:20:48.374050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.374082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-09 05:20:48.374288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.374320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 
00:26:11.931 [2024-12-09 05:20:48.374523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.374556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-09 05:20:48.374753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.374764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.931 qpair failed and we were unable to recover it. 00:26:11.931 [2024-12-09 05:20:48.374849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.931 [2024-12-09 05:20:48.374860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.932 qpair failed and we were unable to recover it. 00:26:11.932 [2024-12-09 05:20:48.375006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.932 [2024-12-09 05:20:48.375019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.932 qpair failed and we were unable to recover it. 00:26:11.932 [2024-12-09 05:20:48.375169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.932 [2024-12-09 05:20:48.375180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.932 qpair failed and we were unable to recover it. 00:26:11.932 [2024-12-09 05:20:48.375344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.932 [2024-12-09 05:20:48.375355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.932 qpair failed and we were unable to recover it. 00:26:11.932 [2024-12-09 05:20:48.375432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.932 [2024-12-09 05:20:48.375443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.932 qpair failed and we were unable to recover it. 00:26:11.932 [2024-12-09 05:20:48.375517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.932 [2024-12-09 05:20:48.375526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.932 qpair failed and we were unable to recover it. 00:26:11.932 [2024-12-09 05:20:48.375616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.932 [2024-12-09 05:20:48.375626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.932 qpair failed and we were unable to recover it. 00:26:11.932 [2024-12-09 05:20:48.375795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.932 [2024-12-09 05:20:48.375806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.932 qpair failed and we were unable to recover it. 
00:26:11.938 [2024-12-09 05:20:48.407020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.938 [2024-12-09 05:20:48.407054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.938 qpair failed and we were unable to recover it. 00:26:11.938 [2024-12-09 05:20:48.407191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.938 [2024-12-09 05:20:48.407223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.938 qpair failed and we were unable to recover it. 00:26:11.938 [2024-12-09 05:20:48.407452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.938 [2024-12-09 05:20:48.407483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.938 qpair failed and we were unable to recover it. 00:26:11.938 [2024-12-09 05:20:48.407628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.938 [2024-12-09 05:20:48.407638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.938 qpair failed and we were unable to recover it. 00:26:11.938 [2024-12-09 05:20:48.407789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.938 [2024-12-09 05:20:48.407800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.938 qpair failed and we were unable to recover it. 00:26:11.938 [2024-12-09 05:20:48.407951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.938 [2024-12-09 05:20:48.407962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.938 qpair failed and we were unable to recover it. 00:26:11.938 [2024-12-09 05:20:48.408120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.938 [2024-12-09 05:20:48.408131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.938 qpair failed and we were unable to recover it. 00:26:11.938 [2024-12-09 05:20:48.408227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.938 [2024-12-09 05:20:48.408237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.938 qpair failed and we were unable to recover it. 00:26:11.938 [2024-12-09 05:20:48.408475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.938 [2024-12-09 05:20:48.408485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.938 qpair failed and we were unable to recover it. 00:26:11.938 [2024-12-09 05:20:48.408587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.938 [2024-12-09 05:20:48.408598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.938 qpair failed and we were unable to recover it. 
00:26:11.938 [2024-12-09 05:20:48.408688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.938 [2024-12-09 05:20:48.408698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.938 qpair failed and we were unable to recover it. 00:26:11.938 [2024-12-09 05:20:48.408880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.938 [2024-12-09 05:20:48.408890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.938 qpair failed and we were unable to recover it. 00:26:11.938 [2024-12-09 05:20:48.408982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.938 [2024-12-09 05:20:48.408991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.938 qpair failed and we were unable to recover it. 00:26:11.938 [2024-12-09 05:20:48.409090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.938 [2024-12-09 05:20:48.409099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.938 qpair failed and we were unable to recover it. 00:26:11.938 [2024-12-09 05:20:48.409270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.938 [2024-12-09 05:20:48.409280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.938 qpair failed and we were unable to recover it. 00:26:11.938 [2024-12-09 05:20:48.409347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.938 [2024-12-09 05:20:48.409357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.938 qpair failed and we were unable to recover it. 00:26:11.938 [2024-12-09 05:20:48.409455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.938 [2024-12-09 05:20:48.409465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.938 qpair failed and we were unable to recover it. 00:26:11.938 [2024-12-09 05:20:48.409561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.938 [2024-12-09 05:20:48.409570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.938 qpair failed and we were unable to recover it. 00:26:11.938 [2024-12-09 05:20:48.409734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.938 [2024-12-09 05:20:48.409745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.938 qpair failed and we were unable to recover it. 00:26:11.938 [2024-12-09 05:20:48.409841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.938 [2024-12-09 05:20:48.409851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.938 qpair failed and we were unable to recover it. 
00:26:11.938 [2024-12-09 05:20:48.409929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.938 [2024-12-09 05:20:48.409938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.938 qpair failed and we were unable to recover it. 00:26:11.938 [2024-12-09 05:20:48.410018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.938 [2024-12-09 05:20:48.410028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.938 qpair failed and we were unable to recover it. 00:26:11.938 [2024-12-09 05:20:48.410181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.938 [2024-12-09 05:20:48.410192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.938 qpair failed and we were unable to recover it. 00:26:11.938 [2024-12-09 05:20:48.410357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.938 [2024-12-09 05:20:48.410369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.938 qpair failed and we were unable to recover it. 00:26:11.938 [2024-12-09 05:20:48.410443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.938 [2024-12-09 05:20:48.410453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.938 qpair failed and we were unable to recover it. 00:26:11.938 [2024-12-09 05:20:48.410613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.410623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.410707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.410717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.410939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.410950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.411108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.411119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.411282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.411292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 
00:26:11.939 [2024-12-09 05:20:48.411451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.411463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.411543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.411554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.411635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.411645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.411872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.411885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.411986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.412003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.412151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.412162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.412307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.412318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.412407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.412417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.412490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.412500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.412584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.412594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 
00:26:11.939 [2024-12-09 05:20:48.412803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.412813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.412892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.412901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.413068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.413081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.413242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.413252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.413341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.413351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.413516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.413527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.413678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.413691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.413763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.413772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.413980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.413991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.414088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.414099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 
00:26:11.939 [2024-12-09 05:20:48.414193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.414202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.414358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.414370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.414523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.414535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.414677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.414688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.414832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.414843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.415003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.415013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.415168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.415180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.939 [2024-12-09 05:20:48.415270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.939 [2024-12-09 05:20:48.415280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.939 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.415363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.415373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.415529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.415539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 
00:26:11.940 [2024-12-09 05:20:48.415680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.415692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.415847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.415858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.416000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.416011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.416093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.416103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.416180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.416190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.416273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.416284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.416474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.416486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.416561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.416571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.416676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.416686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.416865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.416876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 
00:26:11.940 [2024-12-09 05:20:48.416969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.416979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.417078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.417087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.417182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.417191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.417273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.417285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.417372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.417383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.417466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.417476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.417567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.417576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.417720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.417731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.417822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.417832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.417907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.417917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 
00:26:11.940 [2024-12-09 05:20:48.418075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.418087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.418184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.418195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.418347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.418357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.418452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.418462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.418569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.418580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.418723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.418733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.418810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.418821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.419056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.419067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.419160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.419170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.419325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.419336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 
00:26:11.940 [2024-12-09 05:20:48.419430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.419441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.940 [2024-12-09 05:20:48.419635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.940 [2024-12-09 05:20:48.419646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.940 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.419735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.419746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.419811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.419820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.419910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.419920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.420194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.420206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.420287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.420296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.420472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.420483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.420634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.420645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.420724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.420734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 
00:26:11.941 [2024-12-09 05:20:48.420893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.420904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.421068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.421079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.421183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.421193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.421258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.421268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.421338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.421347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.421419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.421429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.421612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.421623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.421727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.421737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.421831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.421841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.421993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.422014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 
00:26:11.941 [2024-12-09 05:20:48.422165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.422175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.422260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.422271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.422507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.422518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.422661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.422674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.422733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.422743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.422895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.422906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.423052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.423064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.423206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.423217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.423305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.423315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.423392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.423403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 
00:26:11.941 [2024-12-09 05:20:48.423554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.423565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.423708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.423718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.423902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.423913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.424021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.424032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.941 [2024-12-09 05:20:48.424170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.941 [2024-12-09 05:20:48.424180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.941 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.424320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.424330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.424423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.424434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.424692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.424702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.424849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.424874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.425082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.425114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 
00:26:11.942 [2024-12-09 05:20:48.425232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.425265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.425407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.425438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.425668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.425701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.425956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.425987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.426204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.426240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.426344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.426361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.426466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.426481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.426663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.426678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.426857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.426871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.427058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.427074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 
00:26:11.942 [2024-12-09 05:20:48.427171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.427207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.427312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.427329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.427432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.427447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.427528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.427542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.427716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.427729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.427839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.427855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.428014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.428029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.428168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.428182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.428369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.428383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.428534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.428549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 
00:26:11.942 [2024-12-09 05:20:48.428647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.428661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.428804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.428819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.428927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.428941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.429105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.429124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.429230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.942 [2024-12-09 05:20:48.429244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.942 qpair failed and we were unable to recover it. 00:26:11.942 [2024-12-09 05:20:48.429432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.429447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.429539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.429553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.429646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.429660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.429824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.429855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.429969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.430014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 
00:26:11.943 [2024-12-09 05:20:48.430265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.430299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.430574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.430606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.430735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.430767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.431017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.431032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.431222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.431236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.431475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.431489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.431606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.431620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.431782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.431797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.431918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.431933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.432084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.432099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 
00:26:11.943 [2024-12-09 05:20:48.432264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.432278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.432441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.432456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.432561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.432577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.432665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.432678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.432921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.432935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.433093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.433108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.433331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.433345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.433542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.433556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.433674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.433689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.433791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.433805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 
00:26:11.943 [2024-12-09 05:20:48.434015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.434032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.434128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.434143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.434233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.434247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.434416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.434431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.434543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.434556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.434794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.434808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.435019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.435034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.435213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.435227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.435314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.943 [2024-12-09 05:20:48.435329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.943 qpair failed and we were unable to recover it. 00:26:11.943 [2024-12-09 05:20:48.435506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.435520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 
00:26:11.944 [2024-12-09 05:20:48.435680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.435693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.435844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.435859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.436077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.436092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.436253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.436267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.436534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.436549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.436646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.436660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.436812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.436826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.436937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.436950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.437105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.437120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.437273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.437287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 
00:26:11.944 [2024-12-09 05:20:48.437467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.437482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.437576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.437591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.437750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.437764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.437865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.437879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.438127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.438143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.438327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.438358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.438499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.438530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.438702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.438734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.438921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.438937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.439023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.439037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 
00:26:11.944 [2024-12-09 05:20:48.439144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.439158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.439319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.439333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.439482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.439496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.439700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.439714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.439866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.439881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.440047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.440062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.440165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.440179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.440269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.440283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.440441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.440456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.440541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.440556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 
00:26:11.944 [2024-12-09 05:20:48.440704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.440721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.440867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.944 [2024-12-09 05:20:48.440883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.944 qpair failed and we were unable to recover it. 00:26:11.944 [2024-12-09 05:20:48.441109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.441124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.441314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.441329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.441491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.441508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.441599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.441614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.441769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.441783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.441881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.441897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.441981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.441994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.442097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.442112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 
00:26:11.945 [2024-12-09 05:20:48.442270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.442285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.442443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.442458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.442698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.442714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.442881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.442896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.443054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.443070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.443341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.443372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.443654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.443688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.443889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.443920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.444230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.444264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.444466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.444498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 
00:26:11.945 [2024-12-09 05:20:48.444749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.444781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.445077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.445092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.445191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.445206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.445317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.445332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.445584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.445598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.445707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.445722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.445873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.445888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.446114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.446130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.446242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.446256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.446443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.446458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 
00:26:11.945 [2024-12-09 05:20:48.446615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.446629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.446725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.446741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.446853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.446868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.446957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.446972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.447081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.447097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.447246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.945 [2024-12-09 05:20:48.447262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.945 qpair failed and we were unable to recover it. 00:26:11.945 [2024-12-09 05:20:48.447446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.447463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.447561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.447577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.447754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.447768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.447871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.447884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 
00:26:11.946 [2024-12-09 05:20:48.448069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.448086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.448195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.448208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.448383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.448397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.448564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.448579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.448797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.448812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.448907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.448920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.449160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.449176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.449283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.449298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.449402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.449417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.449578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.449593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 
00:26:11.946 [2024-12-09 05:20:48.449687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.449702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.449771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.449785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.449887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.449901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.450076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.450092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.450338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.450355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.450525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.450539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.450635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.450650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.450747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.450763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.450865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.450880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.450985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.451003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 
00:26:11.946 [2024-12-09 05:20:48.451117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.451132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.451349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.451364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.451458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.451472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.451633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.451679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.451873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.451906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.452138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.452172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.452369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.452400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.452622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.452653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.452837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.452869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 00:26:11.946 [2024-12-09 05:20:48.453127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.453160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.946 qpair failed and we were unable to recover it. 
00:26:11.946 [2024-12-09 05:20:48.453345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.946 [2024-12-09 05:20:48.453376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.947 qpair failed and we were unable to recover it. 00:26:11.947 [2024-12-09 05:20:48.453573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.947 [2024-12-09 05:20:48.453606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.947 qpair failed and we were unable to recover it. 00:26:11.947 [2024-12-09 05:20:48.453839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.947 [2024-12-09 05:20:48.453879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.947 qpair failed and we were unable to recover it. 00:26:11.947 [2024-12-09 05:20:48.454046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.947 [2024-12-09 05:20:48.454063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.947 qpair failed and we were unable to recover it. 00:26:11.947 [2024-12-09 05:20:48.454173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.947 [2024-12-09 05:20:48.454188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.947 qpair failed and we were unable to recover it. 00:26:11.947 [2024-12-09 05:20:48.454354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.947 [2024-12-09 05:20:48.454368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.947 qpair failed and we were unable to recover it. 00:26:11.947 [2024-12-09 05:20:48.454527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.947 [2024-12-09 05:20:48.454543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.947 qpair failed and we were unable to recover it. 00:26:11.947 [2024-12-09 05:20:48.454641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.947 [2024-12-09 05:20:48.454656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.947 qpair failed and we were unable to recover it. 00:26:11.947 [2024-12-09 05:20:48.454759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.947 [2024-12-09 05:20:48.454775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.947 qpair failed and we were unable to recover it. 00:26:11.947 [2024-12-09 05:20:48.454925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.947 [2024-12-09 05:20:48.454940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.947 qpair failed and we were unable to recover it. 
00:26:11.947 [2024-12-09 05:20:48.455034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.947 [2024-12-09 05:20:48.455053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420
00:26:11.947 qpair failed and we were unable to recover it.
[... the same pair of errors, posix_sock_create: connect() failed, errno = 111 and nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420, repeats continuously between 05:20:48.455222 and 05:20:48.487831, with every attempt ending in "qpair failed and we were unable to recover it." ...]
00:26:11.953 [2024-12-09 05:20:48.487918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.953 [2024-12-09 05:20:48.487932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420
00:26:11.953 qpair failed and we were unable to recover it.
00:26:11.953 [2024-12-09 05:20:48.488089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-09 05:20:48.488106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-09 05:20:48.488196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-09 05:20:48.488209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-09 05:20:48.488324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-09 05:20:48.488339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-09 05:20:48.488436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-09 05:20:48.488450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-09 05:20:48.488619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-09 05:20:48.488633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-09 05:20:48.488722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-09 05:20:48.488737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-09 05:20:48.488834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-09 05:20:48.488847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-09 05:20:48.489005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-09 05:20:48.489021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-09 05:20:48.489183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-09 05:20:48.489198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-09 05:20:48.489359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-09 05:20:48.489374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 
00:26:11.953 [2024-12-09 05:20:48.489549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-09 05:20:48.489563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-09 05:20:48.489730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-09 05:20:48.489747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-09 05:20:48.489914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-09 05:20:48.489929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-09 05:20:48.490009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-09 05:20:48.490023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-09 05:20:48.490094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-09 05:20:48.490108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-09 05:20:48.490227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-09 05:20:48.490241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-09 05:20:48.490331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-09 05:20:48.490346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-09 05:20:48.490538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-09 05:20:48.490554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-09 05:20:48.490719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-09 05:20:48.490733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.953 [2024-12-09 05:20:48.490954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-09 05:20:48.490969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 
00:26:11.953 [2024-12-09 05:20:48.491073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.953 [2024-12-09 05:20:48.491089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.953 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.491169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.491183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.491268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.491284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.491393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.491407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.491501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.491516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.491703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.491718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.491887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.491901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.492059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.492076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.492163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.492176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.492273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.492287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 
00:26:11.954 [2024-12-09 05:20:48.492382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.492399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.492496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.492512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.492733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.492747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.492821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.492834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.492905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.492918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.493072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.493088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.493185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.493199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.493286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.493301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.493454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.493469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.493556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.493569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 
00:26:11.954 [2024-12-09 05:20:48.493659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.493675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.493758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.493774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.493938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.493952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.494123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.494139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.494247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.494261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.494355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.494372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.494479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.494493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.494665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.494680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.494776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.494791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.494899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.494914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 
00:26:11.954 [2024-12-09 05:20:48.495058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.495075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.495234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.495248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.495415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.495431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.495504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.495517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.495615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.495630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.495731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.495745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.495837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.495852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.496009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.496024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.496194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.496210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.496374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.496388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 
00:26:11.954 [2024-12-09 05:20:48.496482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.954 [2024-12-09 05:20:48.496497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.954 qpair failed and we were unable to recover it. 00:26:11.954 [2024-12-09 05:20:48.496598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.496613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.496711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.496726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.496876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.496892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.497060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.497087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.497172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.497184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.497255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.497265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.497364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.497375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.497522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.497533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.497624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.497636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 
00:26:11.955 [2024-12-09 05:20:48.497734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.497748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.497882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.497893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.498065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.498077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.498162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.498172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.498237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.498246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.498347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.498357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.498461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.498472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.498616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.498629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.498772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.498782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.498862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.498873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 
00:26:11.955 [2024-12-09 05:20:48.498956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.498966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.499109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.499120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.499264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.499276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.499345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.499355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.499447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.499458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.499531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.499542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.499634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.499645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.499773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.499783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.499857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.499867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.500015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.500025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 
00:26:11.955 [2024-12-09 05:20:48.500128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.500139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.500222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.500234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.500314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.500324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.500419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.500430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.500580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.500591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.500759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.500769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.500916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.500926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.501033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.501044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.501154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.501165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.955 qpair failed and we were unable to recover it. 00:26:11.955 [2024-12-09 05:20:48.501324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.955 [2024-12-09 05:20:48.501335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 
00:26:11.956 [2024-12-09 05:20:48.501421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.501431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-09 05:20:48.501526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.501538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-09 05:20:48.501723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.501734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-09 05:20:48.501832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.501844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-09 05:20:48.501942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.501954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-09 05:20:48.502052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.502063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-09 05:20:48.502141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.502151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-09 05:20:48.502243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.502254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-09 05:20:48.502399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.502410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-09 05:20:48.502495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.502506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 
00:26:11.956 [2024-12-09 05:20:48.502675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.502688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-09 05:20:48.502785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.502795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-09 05:20:48.503009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.503020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-09 05:20:48.503094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.503104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-09 05:20:48.503205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.503217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-09 05:20:48.503292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.503303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-09 05:20:48.503474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.503485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-09 05:20:48.503659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.503670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-09 05:20:48.503836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.503846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-09 05:20:48.503927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.503937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 
00:26:11.956 [2024-12-09 05:20:48.504033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.504044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-09 05:20:48.504134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.504146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-09 05:20:48.504228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.504239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-09 05:20:48.504379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.504390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-09 05:20:48.504629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.504640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-09 05:20:48.504790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.504802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-09 05:20:48.504888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.504898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-09 05:20:48.505072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.505083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-09 05:20:48.505223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.505235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 00:26:11.956 [2024-12-09 05:20:48.505325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.956 [2024-12-09 05:20:48.505336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:11.956 qpair failed and we were unable to recover it. 
00:26:11.956 [2024-12-09 05:20:48.505425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.956 [2024-12-09 05:20:48.505437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420
00:26:11.956 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every connection attempt logged between 05:20:48.505 and 05:20:48.533 ...]
00:26:12.246 [2024-12-09 05:20:48.533714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.246 [2024-12-09 05:20:48.533726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420
00:26:12.246 qpair failed and we were unable to recover it.
00:26:12.246 [2024-12-09 05:20:48.533820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.246 [2024-12-09 05:20:48.533831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.246 qpair failed and we were unable to recover it. 00:26:12.246 [2024-12-09 05:20:48.534038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.246 [2024-12-09 05:20:48.534051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.246 qpair failed and we were unable to recover it. 00:26:12.246 [2024-12-09 05:20:48.534139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.246 [2024-12-09 05:20:48.534150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.246 qpair failed and we were unable to recover it. 00:26:12.246 [2024-12-09 05:20:48.534223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.246 [2024-12-09 05:20:48.534235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.246 qpair failed and we were unable to recover it. 00:26:12.246 [2024-12-09 05:20:48.534330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.534341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.534563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.534574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.534669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.534680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.534823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.534835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.534919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.534929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.535072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.535083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 
00:26:12.247 [2024-12-09 05:20:48.535250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.535261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.535366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.535377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.535519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.535529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.535686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.535698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.535841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.535853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.535949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.535961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.536043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.536055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.536218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.536230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.536327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.536338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.536478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.536489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 
00:26:12.247 [2024-12-09 05:20:48.536586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.536597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.536683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.536695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.536853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.536864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.536953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.536965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.537122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.537134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.537230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.537241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.537321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.537333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.537474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.537485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.537587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.537598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.537752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.537763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 
00:26:12.247 [2024-12-09 05:20:48.537915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.537926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.538028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.538039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.538198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.538209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.538361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.538373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.538512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.538523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.538681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.538694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.538784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.538794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.538952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.538965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.539105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.539117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.539212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.539223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 
00:26:12.247 [2024-12-09 05:20:48.539309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.539321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.247 [2024-12-09 05:20:48.539461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.247 [2024-12-09 05:20:48.539472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.247 qpair failed and we were unable to recover it. 00:26:12.248 [2024-12-09 05:20:48.539557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.248 [2024-12-09 05:20:48.539569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.248 qpair failed and we were unable to recover it. 00:26:12.248 [2024-12-09 05:20:48.539765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.248 [2024-12-09 05:20:48.539776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.248 qpair failed and we were unable to recover it. 00:26:12.248 [2024-12-09 05:20:48.539869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.248 [2024-12-09 05:20:48.539882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.248 qpair failed and we were unable to recover it. 00:26:12.248 [2024-12-09 05:20:48.540069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.248 [2024-12-09 05:20:48.540080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.248 qpair failed and we were unable to recover it. 00:26:12.248 [2024-12-09 05:20:48.540233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.248 [2024-12-09 05:20:48.540245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.248 qpair failed and we were unable to recover it. 00:26:12.248 [2024-12-09 05:20:48.540408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.248 [2024-12-09 05:20:48.540419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.248 qpair failed and we were unable to recover it. 00:26:12.248 [2024-12-09 05:20:48.540580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.248 [2024-12-09 05:20:48.540592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.248 qpair failed and we were unable to recover it. 00:26:12.248 [2024-12-09 05:20:48.540673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.248 [2024-12-09 05:20:48.540684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.248 qpair failed and we were unable to recover it. 
00:26:12.248 [2024-12-09 05:20:48.540835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.248 [2024-12-09 05:20:48.540847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.248 qpair failed and we were unable to recover it. 00:26:12.248 [2024-12-09 05:20:48.540950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.248 [2024-12-09 05:20:48.540961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.248 qpair failed and we were unable to recover it. 00:26:12.248 [2024-12-09 05:20:48.541118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.248 [2024-12-09 05:20:48.541130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.248 qpair failed and we were unable to recover it. 00:26:12.248 [2024-12-09 05:20:48.541211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.248 [2024-12-09 05:20:48.541222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.248 qpair failed and we were unable to recover it. 00:26:12.248 [2024-12-09 05:20:48.541312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.248 [2024-12-09 05:20:48.541323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.248 qpair failed and we were unable to recover it. 00:26:12.248 [2024-12-09 05:20:48.541466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.248 [2024-12-09 05:20:48.541478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.248 qpair failed and we were unable to recover it. 00:26:12.248 [2024-12-09 05:20:48.541570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.248 [2024-12-09 05:20:48.541581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.248 qpair failed and we were unable to recover it. 00:26:12.248 [2024-12-09 05:20:48.541675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.248 [2024-12-09 05:20:48.541686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.248 qpair failed and we were unable to recover it. 00:26:12.248 [2024-12-09 05:20:48.541773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.248 [2024-12-09 05:20:48.541784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.248 qpair failed and we were unable to recover it. 00:26:12.248 [2024-12-09 05:20:48.541867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.248 [2024-12-09 05:20:48.541878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.248 qpair failed and we were unable to recover it. 
00:26:12.248 [2024-12-09 05:20:48.541959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.248 [2024-12-09 05:20:48.541970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.248 qpair failed and we were unable to recover it. 00:26:12.248 [2024-12-09 05:20:48.542048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.248 [2024-12-09 05:20:48.542061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.248 qpair failed and we were unable to recover it. 00:26:12.248 [2024-12-09 05:20:48.542220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.248 [2024-12-09 05:20:48.542231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.248 qpair failed and we were unable to recover it. 00:26:12.248 [2024-12-09 05:20:48.542324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.248 [2024-12-09 05:20:48.542335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.248 qpair failed and we were unable to recover it. 00:26:12.248 [2024-12-09 05:20:48.542423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.248 [2024-12-09 05:20:48.542434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.248 qpair failed and we were unable to recover it. 00:26:12.248 [2024-12-09 05:20:48.542572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.248 [2024-12-09 05:20:48.542584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.248 qpair failed and we were unable to recover it. 00:26:12.248 [2024-12-09 05:20:48.542673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.248 [2024-12-09 05:20:48.542685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.248 qpair failed and we were unable to recover it. 00:26:12.305 [2024-12-09 05:20:48.542828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.305 [2024-12-09 05:20:48.542839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.305 qpair failed and we were unable to recover it. 00:26:12.305 [2024-12-09 05:20:48.542925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.305 [2024-12-09 05:20:48.542936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.305 qpair failed and we were unable to recover it. 00:26:12.305 [2024-12-09 05:20:48.543167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.305 [2024-12-09 05:20:48.543179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.305 qpair failed and we were unable to recover it. 
00:26:12.305 [2024-12-09 05:20:48.543247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.305 [2024-12-09 05:20:48.543257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.305 qpair failed and we were unable to recover it. 00:26:12.305 [2024-12-09 05:20:48.543334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.305 [2024-12-09 05:20:48.543347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.305 qpair failed and we were unable to recover it. 00:26:12.305 [2024-12-09 05:20:48.543475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.305 [2024-12-09 05:20:48.543485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.305 qpair failed and we were unable to recover it. 00:26:12.305 [2024-12-09 05:20:48.543570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.305 [2024-12-09 05:20:48.543581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.305 qpair failed and we were unable to recover it. 00:26:12.305 [2024-12-09 05:20:48.543732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.305 [2024-12-09 05:20:48.543743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.305 qpair failed and we were unable to recover it. 00:26:12.305 [2024-12-09 05:20:48.543831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.305 [2024-12-09 05:20:48.543842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.305 qpair failed and we were unable to recover it. 00:26:12.305 [2024-12-09 05:20:48.543928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.305 [2024-12-09 05:20:48.543939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.305 qpair failed and we were unable to recover it. 00:26:12.306 [2024-12-09 05:20:48.544025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.544038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 00:26:12.306 [2024-12-09 05:20:48.544189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.544200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 00:26:12.306 [2024-12-09 05:20:48.544305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.544317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 
00:26:12.306 [2024-12-09 05:20:48.544533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.544544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 00:26:12.306 [2024-12-09 05:20:48.544783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.544795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 00:26:12.306 [2024-12-09 05:20:48.544877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.544889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 00:26:12.306 [2024-12-09 05:20:48.544975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.544985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 00:26:12.306 [2024-12-09 05:20:48.545088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.545100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 00:26:12.306 [2024-12-09 05:20:48.545267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.545279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 00:26:12.306 [2024-12-09 05:20:48.545360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.545372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 00:26:12.306 [2024-12-09 05:20:48.545572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.545582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 00:26:12.306 [2024-12-09 05:20:48.545772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.545783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 00:26:12.306 [2024-12-09 05:20:48.545955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.545966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 
00:26:12.306 [2024-12-09 05:20:48.546119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.546131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 00:26:12.306 [2024-12-09 05:20:48.546337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.546349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 00:26:12.306 [2024-12-09 05:20:48.546494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.546506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 00:26:12.306 [2024-12-09 05:20:48.546692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.546703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 00:26:12.306 [2024-12-09 05:20:48.546848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.546859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 00:26:12.306 [2024-12-09 05:20:48.546955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.546967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 00:26:12.306 [2024-12-09 05:20:48.547116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.547127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 00:26:12.306 [2024-12-09 05:20:48.547223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.547235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 00:26:12.306 [2024-12-09 05:20:48.547390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.547402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 00:26:12.306 [2024-12-09 05:20:48.547582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.547592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 
00:26:12.306 [2024-12-09 05:20:48.547691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.547702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 00:26:12.306 [2024-12-09 05:20:48.547941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.547953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 00:26:12.306 [2024-12-09 05:20:48.548252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.548264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 00:26:12.306 [2024-12-09 05:20:48.548363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.548374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 00:26:12.306 [2024-12-09 05:20:48.548451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.548462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 00:26:12.306 [2024-12-09 05:20:48.548643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.306 [2024-12-09 05:20:48.548654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.306 qpair failed and we were unable to recover it. 00:26:12.306 [2024-12-09 05:20:48.548767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.548778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.548962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.548973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.549048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.549057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.549146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.549158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 
00:26:12.307 [2024-12-09 05:20:48.549265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.549277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.549417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.549427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.549585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.549596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.549689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.549700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.549879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.549890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.549981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.549993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.550070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.550081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.550229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.550244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.550473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.550485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.550562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.550573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 
00:26:12.307 [2024-12-09 05:20:48.550732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.550743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.550930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.550941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.551033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.551045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.551293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.551304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.551516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.551529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.551675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.551687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.551837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.551848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.551940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.551951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.552160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.552171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.552406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.552417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 
00:26:12.307 [2024-12-09 05:20:48.552631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.552642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.552742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.552752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.552920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.552931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.553029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.553042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.553303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.553314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.553458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.553468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.553614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.553626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.553721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.553732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.553824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.553835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.307 [2024-12-09 05:20:48.554042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.554054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 
00:26:12.307 [2024-12-09 05:20:48.554216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.307 [2024-12-09 05:20:48.554228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.307 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.554388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.554398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.554480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.554491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.554595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.554606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.554688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.554699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.554775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.554787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.554862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.554873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.555026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.555037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.555212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.555223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.555371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.555383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 
00:26:12.308 [2024-12-09 05:20:48.555470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.555482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.555639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.555651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.555727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.555738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.555904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.555916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.556011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.556023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.556239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.556251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.556333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.556343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.556530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.556543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.556686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.556696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.556873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.556886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 
00:26:12.308 [2024-12-09 05:20:48.557043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.557056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.557244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.557255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.557413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.557424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.557512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.557523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.557673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.557685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.557819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.557832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.558018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.558030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.558113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.558125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.558341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.558353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.558494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.558505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 
00:26:12.308 [2024-12-09 05:20:48.558660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.558672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.558907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.558919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.559063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.559076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.308 qpair failed and we were unable to recover it. 00:26:12.308 [2024-12-09 05:20:48.559150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.308 [2024-12-09 05:20:48.559160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.559314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.559325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.559417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.559428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.559600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.559612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.559838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.559848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.560006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.560018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.560207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.560217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 
00:26:12.309 [2024-12-09 05:20:48.560387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.560398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.560575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.560588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.560681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.560692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.560796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.560807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.560910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.560935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.561105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.561122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.561292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.561307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.561461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.561477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.561572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.561586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.561692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.561708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 
00:26:12.309 [2024-12-09 05:20:48.561814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.561830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.562041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.562057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.562218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.562233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.562390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.562406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.562558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.562572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.562694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.562709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.562861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.562877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.563045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.563065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.563214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.563230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.563471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.563486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 
00:26:12.309 [2024-12-09 05:20:48.563594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.563608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.563778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.563794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.563898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.563912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.564009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.564025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.564179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.564193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.564305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.564320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.564406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.309 [2024-12-09 05:20:48.564420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.309 qpair failed and we were unable to recover it. 00:26:12.309 [2024-12-09 05:20:48.564508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.564524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.564613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.564628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.564718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.564734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 
00:26:12.310 [2024-12-09 05:20:48.564881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.564895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.565052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.565067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.565231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.565248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.565484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.565500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.565690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.565705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.565803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.565819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.566012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.566028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.566137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.566152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.566310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.566326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.566415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.566429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 
00:26:12.310 [2024-12-09 05:20:48.566624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.566638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.566759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.566774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.567017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.567032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.567124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.567139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.567336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.567350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.567429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.567441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.567721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.567733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.567970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.567981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.568145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.568157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.568253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.568263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 
00:26:12.310 [2024-12-09 05:20:48.568415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.568427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.568600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.568611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.568708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.568719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.568935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.568946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.569039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.569049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.569158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.569169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.569280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.569292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.569551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.569566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.569719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.569746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.569988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.570002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 
00:26:12.310 [2024-12-09 05:20:48.570092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.570103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.570264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.570276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.570497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.570508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.310 [2024-12-09 05:20:48.570601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.310 [2024-12-09 05:20:48.570613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.310 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.570698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.570709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.570803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.570815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.571039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.571052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.571199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.571209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.571365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.571376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.571535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.571546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 
00:26:12.311 [2024-12-09 05:20:48.571617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.571627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.571745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.571757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.571858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.571869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.572027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.572040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.572138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.572148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.572227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.572238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.572387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.572398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.572551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.572563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.572722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.572733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.572810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.572821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 
00:26:12.311 [2024-12-09 05:20:48.572981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.572993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.573110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.573122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.573206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.573215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.573301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.573313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.573406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.573423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.573522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.573536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.573694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.573711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.573896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.573911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.574013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.574026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.574185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.574196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 
00:26:12.311 [2024-12-09 05:20:48.574336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.574347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.574454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.574465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.574616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.574628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.574800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.574811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.574964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.574975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.575203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.311 [2024-12-09 05:20:48.575215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.311 qpair failed and we were unable to recover it. 00:26:12.311 [2024-12-09 05:20:48.575299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.575312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.575399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.575412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.575509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.575520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.575614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.575626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 
00:26:12.312 [2024-12-09 05:20:48.575757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.575769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.575915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.575927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.576103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.576115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.576194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.576205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.576292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.576303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.576406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.576416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.576491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.576502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.576650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.576662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.576919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.576930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.577079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.577091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 
00:26:12.312 [2024-12-09 05:20:48.577260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.577272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.577365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.577376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.577526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.577537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.577722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.577734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.577910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.577921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.578073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.578086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.578240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.578252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.578341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.578352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.578500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.578511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.578660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.578671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 
00:26:12.312 [2024-12-09 05:20:48.578829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.578841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.578936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.578947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.579045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.579058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.579162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.579174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.579338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.579355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.579513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.579528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.579671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.579687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.579853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.579868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.580018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.580033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.580189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.580204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 
00:26:12.312 [2024-12-09 05:20:48.580322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.312 [2024-12-09 05:20:48.580337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.312 qpair failed and we were unable to recover it. 00:26:12.312 [2024-12-09 05:20:48.580449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.580463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.580585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.580599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.580759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.580773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.580888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.580903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.581053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.581068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.581205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.581220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.581330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.581348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.581565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.581580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.581747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.581763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 
00:26:12.313 [2024-12-09 05:20:48.581852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.581867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.582035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.582052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.582200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.582215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.582372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.582386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.582535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.582551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.582670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.582686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.582844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.582859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.583032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.583047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.583200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.583215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.583450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.583466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 
00:26:12.313 [2024-12-09 05:20:48.583628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.583642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.583821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.583835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.583933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.583948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.584031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.584046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.584140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.584156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.584333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.584347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.584566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.584581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.584679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.584694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.584793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.584807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.585002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.585018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 
00:26:12.313 [2024-12-09 05:20:48.585240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.585255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.585365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.585379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.585535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.585551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.585637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.585652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.585770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.585784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.585887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.585902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.586077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.586094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.586184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.586200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.313 [2024-12-09 05:20:48.586357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.313 [2024-12-09 05:20:48.586373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.313 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.586477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.586492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 
00:26:12.314 [2024-12-09 05:20:48.586587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.586602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.586688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.586705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.586861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.586875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.587106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.587122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.587232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.587248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.587428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.587443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.587541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.587557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.587645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.587663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.587767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.587783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.588028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.588044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 
00:26:12.314 [2024-12-09 05:20:48.588159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.588174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.588411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.588427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.588595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.588610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.588813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.588829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.589052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.589067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.589285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.589300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.589464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.589480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.589699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.589714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.589832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.589847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.589950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.589965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 
00:26:12.314 [2024-12-09 05:20:48.590207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.590222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.590352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.590367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.590584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.590600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.590705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.590721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.590893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.590909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.591089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.591105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.591273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.591288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.591399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.591414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.591515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.591531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.591697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.591712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 
00:26:12.314 [2024-12-09 05:20:48.591896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.591911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.592030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.592058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.592157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.592170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.592243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.592255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.592400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.592415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.592530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.592542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.314 qpair failed and we were unable to recover it. 00:26:12.314 [2024-12-09 05:20:48.592645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.314 [2024-12-09 05:20:48.592657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.592750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.592761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.592979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.592990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.593153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.593166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 
00:26:12.315 [2024-12-09 05:20:48.593309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.593320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.593457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.593468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.593679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.593691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.593848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.593859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.593958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.593968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.594203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.594214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.594312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.594323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.594408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.594419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.594569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.594581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.594674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.594685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 
00:26:12.315 [2024-12-09 05:20:48.594839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.594852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.595013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.595025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.595114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.595125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.595280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.595292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.595385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.595396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.595542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.595553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.595650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.595661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.595738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.595749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.595836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.595847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.596009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.596021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 
00:26:12.315 [2024-12-09 05:20:48.596177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.596189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.596356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.596371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.596508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.596522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.596644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.596658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.596876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.596891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.597009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.597024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.597213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.597226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.597321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.597333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.597489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.597500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 00:26:12.315 [2024-12-09 05:20:48.597639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.315 [2024-12-09 05:20:48.597650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.315 qpair failed and we were unable to recover it. 
00:26:12.315 [2024-12-09 05:20:48.597796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.597807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.597908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.597920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.598155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.598168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.598327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.598339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.598507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.598522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.598664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.598675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.598825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.598838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.598963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.598974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.599083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.599095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.599251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.599263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 
00:26:12.316 [2024-12-09 05:20:48.599415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.599426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.599585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.599597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.599690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.599701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.599818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.599830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.599983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.599995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.600151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.600162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.600306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.600318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.600490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.600503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.600615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.600626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.600710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.600723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 
00:26:12.316 [2024-12-09 05:20:48.600826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.600837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.600924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.600936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.601019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.601030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.601178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.601189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.601271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.601283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.601366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.601377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.601477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.601489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.601570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.601580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.601728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.601739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.601846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.601857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 
00:26:12.316 [2024-12-09 05:20:48.602007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.602019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.602102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.602112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.602193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.602204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.602345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.602356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.602481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.602493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.602595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.602606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.602702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.602713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.602864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.316 [2024-12-09 05:20:48.602875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.316 qpair failed and we were unable to recover it. 00:26:12.316 [2024-12-09 05:20:48.603037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.603049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.603184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.603196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 
00:26:12.317 [2024-12-09 05:20:48.603306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.603317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.603396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.603407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.603495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.603506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.603611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.603622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.603753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.603766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.603873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.603885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.604057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.604069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.604157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.604169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.604244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.604255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.604401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.604413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 
00:26:12.317 [2024-12-09 05:20:48.604495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.604505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.604669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.604681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.604780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.604791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.605034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.605054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.605212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.605224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.605355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.605367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.605513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.605525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.605629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.605640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.605726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.605738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.605927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.605939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 
00:26:12.317 [2024-12-09 05:20:48.606028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.606039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.606141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.606153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.606313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.606325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.606456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.606468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.606546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.606556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.606633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.606644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.606861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.606873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.607011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.607022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.607126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.607137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.607252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.607263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 
00:26:12.317 [2024-12-09 05:20:48.607405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.607416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.607513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.607525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.607599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.607609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.607773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.607784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.607874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.607886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.608041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.608052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.317 qpair failed and we were unable to recover it. 00:26:12.317 [2024-12-09 05:20:48.608161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.317 [2024-12-09 05:20:48.608172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.608267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.608279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.608388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.608398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.608478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.608488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 
00:26:12.318 [2024-12-09 05:20:48.608574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.608585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.608745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.608756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.608903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.608914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.609062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.609074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.609163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.609177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.609260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.609271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.609422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.609434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.609591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.609602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.609712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.609723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.609809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.609819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 
00:26:12.318 [2024-12-09 05:20:48.609924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.609936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.610007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.610018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.610111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.610124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.610217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.610229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.610304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.610315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.610396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.610413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.610576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.610588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.610691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.610702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.610920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.610933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.611077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.611088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 
00:26:12.318 [2024-12-09 05:20:48.611168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.611180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.611416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.611429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.611622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.611634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.611841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.611852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.611956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.611967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.612057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.612070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.612236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.612248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.612427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.612438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.612599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.612611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.612705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.612716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 
00:26:12.318 [2024-12-09 05:20:48.612933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.612944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.613172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.613185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.613290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.613302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.613399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.318 [2024-12-09 05:20:48.613410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.318 qpair failed and we were unable to recover it. 00:26:12.318 [2024-12-09 05:20:48.613496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.613508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.613670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.613681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.613773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.613785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.613948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.613959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.614126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.614138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.614235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.614248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 
00:26:12.319 [2024-12-09 05:20:48.614396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.614409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.614503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.614515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.614659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.614672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.614770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.614782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.614894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.614909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.615008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.615020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.615175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.615187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.615275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.615286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.615379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.615391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.615485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.615497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 
00:26:12.319 [2024-12-09 05:20:48.615651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.615663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.615846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.615858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.616007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.616019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.616126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.616139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.616283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.616295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.616392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.616403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.616496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.616508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.616643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.616655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.616740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.616751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.616868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.616880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 
00:26:12.319 [2024-12-09 05:20:48.617056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.617068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.617153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.617166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.617257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.617269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.617405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.617417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.617578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.617590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.617678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.617689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.617802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.617814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.617976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.617988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.618069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.618080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.618224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.618237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 
00:26:12.319 [2024-12-09 05:20:48.618331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.618343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.618440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.618451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.319 qpair failed and we were unable to recover it. 00:26:12.319 [2024-12-09 05:20:48.618537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.319 [2024-12-09 05:20:48.618548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.618788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.618800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.618899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.618912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.619007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.619019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.619100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.619111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.619202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.619213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.619380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.619391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.619468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.619478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 
00:26:12.320 [2024-12-09 05:20:48.619576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.619588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.619740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.619752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.619898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.619910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.619989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.620005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.620110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.620125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.620211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.620223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.620380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.620391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.620472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.620483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.620637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.620648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.620804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.620815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 
00:26:12.320 [2024-12-09 05:20:48.620905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.620917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.620981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.620992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.621136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.621147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.621237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.621248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.621336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.621348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.621425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.621437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.621592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.621605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.621706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.621718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.621823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.621834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.622017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.622030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 
00:26:12.320 [2024-12-09 05:20:48.622123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.622135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.622279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.622291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.622447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.622459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.622555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.622566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.622674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.622686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.622836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.320 [2024-12-09 05:20:48.622849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.320 qpair failed and we were unable to recover it. 00:26:12.320 [2024-12-09 05:20:48.622928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.622939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.623082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.623094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.623198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.623209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.623294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.623306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 
00:26:12.321 [2024-12-09 05:20:48.623393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.623404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.623573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.623584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.623728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.623739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.623820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.623832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.623989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.624006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.624221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.624233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.624441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.624453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.624550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.624563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.624666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.624677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.624777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.624788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 
00:26:12.321 [2024-12-09 05:20:48.625010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.625022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.625258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.625269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.625383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.625394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.625481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.625494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.625569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.625583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.625667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.625678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.625767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.625778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.625881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.625893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.626058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.626071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.626233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.626245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 
00:26:12.321 [2024-12-09 05:20:48.626413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.626426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.626588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.626600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.626821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.626833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.626926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.626937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.627030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.627043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.627144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.627155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.627227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.627245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.627405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.627416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.627580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.627592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.627671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.627681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 
00:26:12.321 [2024-12-09 05:20:48.627823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.627834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.628049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.628061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.628163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.628176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.628279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.321 [2024-12-09 05:20:48.628291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.321 qpair failed and we were unable to recover it. 00:26:12.321 [2024-12-09 05:20:48.628388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.628400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.628495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.628505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.628739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.628750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.629039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.629052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.629267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.629279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.629451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.629462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 
00:26:12.322 [2024-12-09 05:20:48.629667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.629678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.629835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.629848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.630023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.630034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.630190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.630201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.630364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.630375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.630540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.630552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.630664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.630676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.630765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.630775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.630861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.630872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.630958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.630969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 
00:26:12.322 [2024-12-09 05:20:48.631085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.631097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.631177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.631187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.631361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.631372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.631528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.631539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.631625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.631639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.631839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.631850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.631952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.631964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.632121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.632132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.632228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.632238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.632330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.632343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 
00:26:12.322 [2024-12-09 05:20:48.632486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.632497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.632661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.632673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.632847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.632858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.632933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.632943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.633114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.633126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.633286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.633296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.633391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.633401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.633557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.633568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.633661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.633674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.633832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.633843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 
00:26:12.322 [2024-12-09 05:20:48.634018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.634030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.634225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.634237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.634336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.634348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.322 [2024-12-09 05:20:48.634453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.322 [2024-12-09 05:20:48.634464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.322 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.634547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.634558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.634725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.634736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.634825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.634837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.634929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.634941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.635036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.635049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.635133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.635145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 
00:26:12.323 [2024-12-09 05:20:48.635244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.635255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.635472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.635507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.635634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.635651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.635757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.635772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.635938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.635954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.636051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.636068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.636228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.636244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.636443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.636459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.636633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.636648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.636745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.636760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 
00:26:12.323 [2024-12-09 05:20:48.636910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.636926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.637155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.637171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.637277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.637293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.637410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.637424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.637590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.637610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.637704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.637719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.637858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.637872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.638053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.638068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.638223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.638238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.638445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.638461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 
00:26:12.323 [2024-12-09 05:20:48.638628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.638642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.638756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.638772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.638875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.638889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.639052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.639068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.639166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.639181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.639333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.639348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.639449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.639463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.639556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.639572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.639786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.639802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.640019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.640035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 
00:26:12.323 [2024-12-09 05:20:48.640204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.640218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.640306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.640320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.640576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.640591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.640774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.640789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.640951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.323 [2024-12-09 05:20:48.640965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.323 qpair failed and we were unable to recover it. 00:26:12.323 [2024-12-09 05:20:48.641118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.641133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.641355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.641371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.641562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.641576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.641690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.641704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.641804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.641819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 
00:26:12.324 [2024-12-09 05:20:48.642054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.642070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.642161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.642175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.642282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.642293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.642396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.642409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.642565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.642576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.642794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.642807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.642976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.642989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.643090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.643102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.643191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.643202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.643285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.643297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 
00:26:12.324 [2024-12-09 05:20:48.643473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.643484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.643590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.643603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.643801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.643813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.643908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.643920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.644089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.644103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.644343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.644354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.644499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.644510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.644604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.644616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.644710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.644722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.644957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.644967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 
00:26:12.324 [2024-12-09 05:20:48.645046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.645056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.645278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.645290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.645403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.645414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.645605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.645617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.645777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.645788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.646002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.646014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.646257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.646269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.646442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.646453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.646548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.646560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.646661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.646673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 
00:26:12.324 [2024-12-09 05:20:48.646827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.646838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.646924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.324 [2024-12-09 05:20:48.646935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.324 qpair failed and we were unable to recover it. 00:26:12.324 [2024-12-09 05:20:48.647094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.647107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.647264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.647275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.647419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.647431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.647586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.647597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.647745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.647756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.647842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.647854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.648016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.648028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.648121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.648132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 
00:26:12.325 [2024-12-09 05:20:48.648231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.648242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.648395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.648413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.648632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.648647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.648742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.648758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.648842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.648857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.648968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.648983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.649116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.649151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.649349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.649370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.649465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.649481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.649554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.649570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 
00:26:12.325 [2024-12-09 05:20:48.649676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.649692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.649795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.649810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.649901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.649917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.650018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.650035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.650128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.650147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.650237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.650251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.650439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.650454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.650567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.650582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.650663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.650678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.650828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.650843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 
00:26:12.325 [2024-12-09 05:20:48.650963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.650978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.651088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.651104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.651208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.651223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.651319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.651333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.651572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.651586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.651678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.651694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.651846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.651860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.652028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.652045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.652315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.652331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.652429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.652444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 
00:26:12.325 [2024-12-09 05:20:48.652595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.652611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.652765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.652780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.653020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.653035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.653182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.653196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.653294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.325 [2024-12-09 05:20:48.653309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.325 qpair failed and we were unable to recover it. 00:26:12.325 [2024-12-09 05:20:48.653495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.653510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.653696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.653712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.653864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.653878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.653974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.653985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.654239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.654260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 
00:26:12.326 [2024-12-09 05:20:48.654520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.654537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.654686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.654705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.654889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.654903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.655054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.655070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.655165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.655179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.655272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.655287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.655523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.655543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.655633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.655648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.655812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.655828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.655934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.655949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 
00:26:12.326 [2024-12-09 05:20:48.656203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.656219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.656313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.656328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.656436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.656451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.656605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.656619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.656899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.656914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.657035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.657047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.657194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.657206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.657368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.657379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.657460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.657470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.657576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.657587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 
00:26:12.326 [2024-12-09 05:20:48.657740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.657751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.657847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.657858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.657942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.657953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.658111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.658122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.658284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.658296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.658447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.658458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.658564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.658575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.658669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.658680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.658784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.658796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.658885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.658895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 
00:26:12.326 [2024-12-09 05:20:48.659054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.659066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.659214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.659225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.659325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.659335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.659417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.659428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.659532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.659544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.659645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.659657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.659782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.659793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.659952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.659964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.660118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.326 [2024-12-09 05:20:48.660129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.326 qpair failed and we were unable to recover it. 00:26:12.326 [2024-12-09 05:20:48.660292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.660303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 
00:26:12.327 [2024-12-09 05:20:48.660401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.660412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.660506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.660521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.660687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.660699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.660791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.660802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.660942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.660953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.661111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.661123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.661213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.661224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.661307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.661318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.661483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.661494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.661661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.661671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 
00:26:12.327 [2024-12-09 05:20:48.661751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.661762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.661975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.661986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.662101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.662113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.662276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.662288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.662458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.662470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.662675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.662687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.662770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.662781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.663014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.663027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.663183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.663194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.663345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.663357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 
00:26:12.327 [2024-12-09 05:20:48.663588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.663599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.663825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.663837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.664094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.664107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.664221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.664232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.664444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.664456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.664550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.664560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.664701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.664714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.664925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.664937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.665096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.665109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.665191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.665202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 
00:26:12.327 [2024-12-09 05:20:48.665291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.665302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.665383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.665394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.665493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.665503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.665589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.665600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.665775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.665787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.665953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.665965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.666070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.666082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.666253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.666265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.666368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.666378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.666522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.666534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 
00:26:12.327 [2024-12-09 05:20:48.666636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.666649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.666747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.666761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.666918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.666930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.327 [2024-12-09 05:20:48.667018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.327 [2024-12-09 05:20:48.667029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.327 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.667151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.667162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.667269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.667280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.667374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.667386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.667484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.667496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.667587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.667598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.667741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.667751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 
00:26:12.328 [2024-12-09 05:20:48.667839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.667851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.668009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.668021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.668113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.668125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.668369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.668379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.668482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.668493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.668747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.668776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.668867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.668878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.669053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.669065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.669265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.669276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.669432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.669443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 
00:26:12.328 [2024-12-09 05:20:48.669545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.669556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.669646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.669657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.669816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.669827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.669920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.669932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.670036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.670047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.670277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.670288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.670388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.670399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.670494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.670506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.670668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.670681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.670860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.670872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 
00:26:12.328 [2024-12-09 05:20:48.671104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.671116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.671209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.671219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.671299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.671309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.671412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.671424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.671584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.671595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.671685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.671695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.671960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.671971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.672137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.672148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.672397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.672409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.672553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.672564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 
00:26:12.328 [2024-12-09 05:20:48.672746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.672758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.672853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.672867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.673045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.673057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.328 [2024-12-09 05:20:48.673213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.328 [2024-12-09 05:20:48.673225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.328 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.673432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.673443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.673652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.673665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.673751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.673761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.673978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.673990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.674163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.674174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.674346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.674357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 
00:26:12.329 [2024-12-09 05:20:48.674461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.674473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.674671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.674682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.674919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.674933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.675114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.675127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.675227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.675238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.675401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.675413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.675567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.675577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.675723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.675733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.675912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.675924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.676046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.676057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 
00:26:12.329 [2024-12-09 05:20:48.676155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.676167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.676269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.676280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.676368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.676380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.676469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.676480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.676557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.676566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.676708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.676718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.676893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.676906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.677047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.677059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.677222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.677234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.677397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.677408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 
00:26:12.329 [2024-12-09 05:20:48.677510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.677521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.677605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.677618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.677785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.677796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.677876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.677886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.678032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.678043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.678204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.678215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.678370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.678382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.678570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.678583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.678806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.678819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.679068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.679079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 
00:26:12.329 [2024-12-09 05:20:48.679253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.679265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.679437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.679450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.679713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.679725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.679798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.679808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.679955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.679966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.680112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.680125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.680357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.680368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.680487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.329 [2024-12-09 05:20:48.680498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.329 qpair failed and we were unable to recover it. 00:26:12.329 [2024-12-09 05:20:48.680645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.680656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.680830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.680843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 
00:26:12.330 [2024-12-09 05:20:48.680928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.680940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.681026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.681038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.681264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.681278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.681431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.681442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.681605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.681618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.681779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.681791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.681896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.681908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.682130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.682142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.682235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.682246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.682460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.682471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 
00:26:12.330 [2024-12-09 05:20:48.682598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.682619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.682723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.682734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.682903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.682914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.683054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.683066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.683271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.683283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.683387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.683399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.683614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.683637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.683741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.683753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.683911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.683924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.684094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.684106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 
00:26:12.330 [2024-12-09 05:20:48.684329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.684340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.684453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.684464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.684591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.684603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.684735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.684745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.684818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.684828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.685006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.685018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.685239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.685251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.685348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.685359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.685554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.685566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.685669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.685681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 
00:26:12.330 [2024-12-09 05:20:48.685916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.685928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.686084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.686096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.686247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.686259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.686410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.686423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.686628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.686651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.686805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.686816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.686964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.686974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.687186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.687199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.687319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.687331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.687475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.687486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 
00:26:12.330 [2024-12-09 05:20:48.687750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.687762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.687956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.687966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.688145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.688158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.688369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.688381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.330 qpair failed and we were unable to recover it. 00:26:12.330 [2024-12-09 05:20:48.688576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.330 [2024-12-09 05:20:48.688588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.688777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.688788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.689010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.689021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.689198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.689210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.689388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.689399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.689502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.689514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 
00:26:12.331 [2024-12-09 05:20:48.689726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.689737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.689947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.689958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.690171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.690183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.690325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.690337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.690491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.690503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.690666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.690688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.690843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.690855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.691012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.691040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.691121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.691135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.691228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.691239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 
00:26:12.331 [2024-12-09 05:20:48.691473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.691486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.691704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.691716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.691981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.691993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.692169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.692181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.692336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.692349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.692512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.692524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.692682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.692694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.692917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.692930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.693009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.693020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.693188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.693200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 
00:26:12.331 [2024-12-09 05:20:48.693365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.693377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.693553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.693565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.693732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.693745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.693942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.693955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.694052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.694064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.694148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.694159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.694405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.694416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.694528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.694541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.694690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.694702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.694896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.694910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 
00:26:12.331 [2024-12-09 05:20:48.695153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.695176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.695261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.695271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.695376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.695388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.695540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.695551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.695728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.695740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.696005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.696017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.696151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.696162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.696381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.696393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.696554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.696565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.696760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.696771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 
00:26:12.331 [2024-12-09 05:20:48.697008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.697020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.697148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.331 [2024-12-09 05:20:48.697159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.331 qpair failed and we were unable to recover it. 00:26:12.331 [2024-12-09 05:20:48.697262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.697274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.697417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.697428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.697570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.697581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.697815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.697827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.698085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.698098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.698215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.698227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.698317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.698331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.698423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.698435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 
00:26:12.332 [2024-12-09 05:20:48.698595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.698607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.698696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.698705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.698848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.698859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.698955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.698968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.699117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.699129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.699276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.699286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.699448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.699459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.699662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.699673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.699897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.699909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.700133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.700144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 
00:26:12.332 [2024-12-09 05:20:48.700261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.700273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.700449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.700460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.700664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.700675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.700844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.700855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.701034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.701045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.701209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.701221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.701401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.701412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.701572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.701583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.701819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.701830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.702066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.702099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 
00:26:12.332 [2024-12-09 05:20:48.702242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.702273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.702529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.702561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.702777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.702788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.703014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.703025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.703181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.703192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.703377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.703388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.703652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.703663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.703815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.703827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.703978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.703989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.704111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.704122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 
00:26:12.332 [2024-12-09 05:20:48.704290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.704301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.704479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.704488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.704665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.704677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.704902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.704913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.704986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.704997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.705141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.705152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.705408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.705419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.332 [2024-12-09 05:20:48.705571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.332 [2024-12-09 05:20:48.705584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.332 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.705678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.705692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.705936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.705949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 
00:26:12.333 [2024-12-09 05:20:48.706093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.706105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.706264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.706276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.706384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.706395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.706568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.706580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.706665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.706675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.706775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.706787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.706886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.706899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.706994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.707020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.707212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.707225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.707391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.707403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 
00:26:12.333 [2024-12-09 05:20:48.707630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.707641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.707803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.707817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.707971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.707983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.708268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.708281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.708556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.708568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.708783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.708796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.708951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.708963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.709107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.709129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.709223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.709234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.709381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.709392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 
00:26:12.333 [2024-12-09 05:20:48.709499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.709510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.709663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.709676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.709835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.709849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.710075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.710088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.710248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.710261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.710411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.710423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.710518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.710529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.710610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.710621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.710800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.710814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.710969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.710982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 
00:26:12.333 [2024-12-09 05:20:48.711187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.711201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.711294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.711305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.711469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.711483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.711650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.711663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.711829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.711842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.712092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.712105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.712180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.712192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.712332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.712345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.712435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.712449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.712590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.712603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 
00:26:12.333 [2024-12-09 05:20:48.712779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.712792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.712868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.712878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.713023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.713037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.713231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.713244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.713386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.713399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.713570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.713583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.713686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.713698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.713866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.713879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.333 [2024-12-09 05:20:48.714099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.333 [2024-12-09 05:20:48.714111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.333 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.714213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.714225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 
00:26:12.334 [2024-12-09 05:20:48.714386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.714400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.714637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.714650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.714912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.714924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.715018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.715030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.715215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.715229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.715388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.715400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.715522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.715536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.715688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.715702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.715925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.715938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.716171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.716185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 
00:26:12.334 [2024-12-09 05:20:48.716407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.716420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.716671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.716685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.716835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.716848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.717095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.717109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.717258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.717272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.717447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.717459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.717599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.717613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.717772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.717784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.717996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.718016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.718164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.718178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 
00:26:12.334 [2024-12-09 05:20:48.718355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.718368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.718522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.718535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.718771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.718784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.718975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.718989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.719149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.719162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.719322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.719335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.719498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.719510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.719727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.719740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.719911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.719926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.720123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.720137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 
00:26:12.334 [2024-12-09 05:20:48.720225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.720237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.720420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.720433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.720581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.720593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.720692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.720703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.720933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.720946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.721053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.721065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.721219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.721231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.721317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.721328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.721469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.721482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.721645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.721658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 
00:26:12.334 [2024-12-09 05:20:48.721810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.721823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.721912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.721923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.722096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.722110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.722209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.722221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.722361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.722374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.722539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.722554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.722669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.722682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.722844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.722857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.723091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.723104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.334 qpair failed and we were unable to recover it. 00:26:12.334 [2024-12-09 05:20:48.723291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.334 [2024-12-09 05:20:48.723304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 
00:26:12.335 [2024-12-09 05:20:48.723484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.723496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.723683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.723696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.723861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.723874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.724118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.724131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.724298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.724311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.724550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.724564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.724661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.724674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.724889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.724902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.725048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.725061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.725282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.725296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 
00:26:12.335 [2024-12-09 05:20:48.725515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.725528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.725716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.725730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.725961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.725973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.726143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.726156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.726368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.726380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.726609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.726623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.726781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.726793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.726936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.726950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.727166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.727182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.727325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.727338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 
00:26:12.335 [2024-12-09 05:20:48.727485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.727497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.727665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.727677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.727980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.727994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.728177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.728190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.728359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.728371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.728468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.728481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.728639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.728652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.728904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.728919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.729105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.729119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.729354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.729367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 
00:26:12.335 [2024-12-09 05:20:48.729576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.729588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.729796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.729809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.729910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.729921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.730079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.730092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.730181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.730192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.730273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.730284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.730433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.730446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.730655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.730669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.730838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.730851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.730996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.731015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 
00:26:12.335 [2024-12-09 05:20:48.731197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.731210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.731439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.731453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.731557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.731570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.731665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.731677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.731837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.731850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.335 qpair failed and we were unable to recover it. 00:26:12.335 [2024-12-09 05:20:48.732070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.335 [2024-12-09 05:20:48.732107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.732276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.732294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.732404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.732421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.732519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.732535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.732706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.732723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 
00:26:12.336 [2024-12-09 05:20:48.732888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.732905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.733106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.733124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.733282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.733300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.733492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.733508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.733714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.733730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.733886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.733903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.734073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.734091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.734356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.734372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.734586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.734603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.734797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.734813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 
00:26:12.336 [2024-12-09 05:20:48.735061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.735079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.735242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.735260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.735410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.735428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.735598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.735616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.735786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.735802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.735969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.735985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.736147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.736164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.736387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.736405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.736624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.736640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.736906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.736922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 
00:26:12.336 [2024-12-09 05:20:48.737031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.737050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.737229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.737245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.737466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.737502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.737690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.737705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.737879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.737891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.738103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.738116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.738223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.738236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.738458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.738470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.738700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.738714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.738990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.739007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 
00:26:12.336 [2024-12-09 05:20:48.739222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.739235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.739381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.739393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.739480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.739492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.739662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.739676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.739767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.739778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.740037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.740053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.740293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.740306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.740463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.740477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.740685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.740698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.740956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.740969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 
00:26:12.336 [2024-12-09 05:20:48.741064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.741076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.741335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.741348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.741525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.741538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.741726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.741738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.742025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.742038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.742274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.742288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.336 [2024-12-09 05:20:48.742458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.336 [2024-12-09 05:20:48.742471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.336 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.742576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.742590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.742748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.742761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.742847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.742860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 
00:26:12.337 [2024-12-09 05:20:48.743069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.743082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.743231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.743245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.743358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.743372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.743522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.743536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.743771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.743784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.743977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.743989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.744101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.744113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.744341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.744354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.744453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.744465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.744613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.744626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 
00:26:12.337 [2024-12-09 05:20:48.744867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.744880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.745072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.745085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.745232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.745247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.745420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.745433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.745595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.745608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.745769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.745782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.746012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.746026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.746193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.746207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.746460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.746473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.746655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.746669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 
00:26:12.337 [2024-12-09 05:20:48.746836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.746849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.747018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.747031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.747254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.747267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.747437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.747450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.747536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.747547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.747757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.747769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.748030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.748043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.748199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.748212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.748302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.748313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.748525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.748538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 
00:26:12.337 [2024-12-09 05:20:48.748675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.748687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.748842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.748855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.748939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.748951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.749126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.749139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.749344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.749357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.749530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.749543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.749692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.749705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.749847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.749860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.750034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.750048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.750291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.750305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 
00:26:12.337 [2024-12-09 05:20:48.750517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.750530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.750802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.750815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.751024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.751036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.751244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.751258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.751414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.751427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.751523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.751534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.751702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.751715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.751885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.751898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.751986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.752003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.337 qpair failed and we were unable to recover it. 00:26:12.337 [2024-12-09 05:20:48.752093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.337 [2024-12-09 05:20:48.752105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 
00:26:12.338 [2024-12-09 05:20:48.752181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.752192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.752352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.752365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.752520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.752536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.752743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.752755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.752902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.752914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.753050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.753064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.753222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.753236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.753381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.753393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.753558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.753571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.753754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.753766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 
00:26:12.338 [2024-12-09 05:20:48.753877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.753890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.754045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.754059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.754209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.754223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.754434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.754448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.754658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.754671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.754822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.754834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.754993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.755011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.755217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.755231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.755310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.755321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.755529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.755541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 
00:26:12.338 [2024-12-09 05:20:48.755652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.755665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.755772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.755786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.755879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.755891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.756058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.756072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.756295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.756308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.756564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.756578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.756816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.756829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.756992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.757011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.757262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.757274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.757457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.757470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 
00:26:12.338 [2024-12-09 05:20:48.757656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.757669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.757903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.757916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.757982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.757993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.758162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.758175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.758320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.758334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.758500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.758513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.758672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.758684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.758828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.758841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.759009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.759023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.759105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.759115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 
00:26:12.338 [2024-12-09 05:20:48.759287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.759300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.759533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.759545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.759701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.759716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.759911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.759924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.760007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.760019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.760205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.760218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.760377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.760389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.760601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.760613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.760754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.760767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.760865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.760877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 
00:26:12.338 [2024-12-09 05:20:48.760975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.760987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.761241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.761263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.761458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.761476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.338 qpair failed and we were unable to recover it. 00:26:12.338 [2024-12-09 05:20:48.761728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.338 [2024-12-09 05:20:48.761745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.761973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.761991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.762213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.762231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.762457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.762475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.762579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.762596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.762748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.762764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.762941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.762958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 
00:26:12.339 [2024-12-09 05:20:48.763180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.763196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.763315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.763330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.763482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.763500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.763724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.763743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.763831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.763846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.763995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.764019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.764181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.764198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.764428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.764444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.764632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.764648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.764809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.764826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 
00:26:12.339 [2024-12-09 05:20:48.765086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.765104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.765402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.765417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.765629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.765642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.765799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.765811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.766040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.766053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.766266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.766279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.766443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.766455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.766663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.766678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.766755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.766766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.766980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.766994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 
00:26:12.339 [2024-12-09 05:20:48.767195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.767207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.767398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.767411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.767622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.767638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.767779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.767792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.767952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.767965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.768123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.768137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.768340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.768354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.768509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.768521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.768680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.768692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.768990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.769008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 
00:26:12.339 [2024-12-09 05:20:48.769120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.769133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.769354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.769368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.769525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.769539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.769683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.769697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.769867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.769880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.770119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.770133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.770300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.770314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.770467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.770480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.770632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.770646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.770786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.770800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 
00:26:12.339 [2024-12-09 05:20:48.771046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.771060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.771272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.771285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.771367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.771379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.771477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.771489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.771595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.339 [2024-12-09 05:20:48.771608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.339 qpair failed and we were unable to recover it. 00:26:12.339 [2024-12-09 05:20:48.771689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.771701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.771853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.771865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.772018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.772032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.772125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.772137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.772318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.772331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 
00:26:12.340 [2024-12-09 05:20:48.772500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.772514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.772750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.772764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.772940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.772954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.773189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.773202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.773389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.773403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.773559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.773574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.773720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.773733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.773968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.773981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.774186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.774200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.774340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.774353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 
00:26:12.340 [2024-12-09 05:20:48.774510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.774523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.774775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.774788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.774971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.774986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.775092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.775114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.775217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.775232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.775477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.775494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.775594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.775611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.775755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.775773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.775927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.775945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.776034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.776051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 
00:26:12.340 [2024-12-09 05:20:48.776211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.776227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.776389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.776407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.776503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.776519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.776622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.776637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.776784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.776801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.776886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.776900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.777145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.777158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.777316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.777328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.777569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.777583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.777686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.777699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 
00:26:12.340 [2024-12-09 05:20:48.777803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.777815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.777894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.777905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.778054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.778067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.778224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.778236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.778319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.778330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.778479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.778492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.778654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.778667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.778814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.778828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.779010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.779027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.779138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.779150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 
00:26:12.340 [2024-12-09 05:20:48.779257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.779270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.779432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.779446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.779551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.779564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.779676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.779690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.779793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.779805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.779891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.779903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.780017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.780030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.780132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.780145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.780224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.780235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.780329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.780340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 
00:26:12.340 [2024-12-09 05:20:48.780502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.780515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.780610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.340 [2024-12-09 05:20:48.780621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.340 qpair failed and we were unable to recover it. 00:26:12.340 [2024-12-09 05:20:48.780787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.780803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.780952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.780965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.781133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.781146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.781298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.781312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.781461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.781474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.781625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.781638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.781783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.781797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.781884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.781897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 
00:26:12.341 [2024-12-09 05:20:48.782062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.782076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.782155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.782167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.782238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.782249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.782403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.782416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.782502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.782514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.782606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.782618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.782781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.782794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.782893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.782904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.782988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.783008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.783164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.783177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 
00:26:12.341 [2024-12-09 05:20:48.783263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.783274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.783355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.783366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.783518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.783530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.783622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.783633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.783788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.783801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.783894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.783907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.784086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.784099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.784197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.784208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.784336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.784349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.784498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.784512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 
00:26:12.341 [2024-12-09 05:20:48.784656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.784668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.784749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.784760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.784969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.784983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.785152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.785165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.785259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.785278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.785556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.785569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.785663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.785676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.785835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.785847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.785945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.785958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.786108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.786122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 
00:26:12.341 [2024-12-09 05:20:48.786268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.786282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.786425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.786439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.786544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.786558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.786741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.786754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.786850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.786863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.787046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.787059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.787159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.787170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.787341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.787355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.787523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.787535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.787612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.787623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 
00:26:12.341 [2024-12-09 05:20:48.787779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.787793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.787939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.787951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.788103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.788117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.788275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.788287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.788432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.788444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.788616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.788628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.788772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.788785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.788877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.788890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.789028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.341 [2024-12-09 05:20:48.789041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.341 qpair failed and we were unable to recover it. 00:26:12.341 [2024-12-09 05:20:48.789196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.789209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 
00:26:12.342 [2024-12-09 05:20:48.789372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.789386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.789546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.789559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.789702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.789714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.789814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.789828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.790017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.790031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.790109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.790121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.790214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.790226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.790302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.790314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.790477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.790490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.790597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.790633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 
00:26:12.342 [2024-12-09 05:20:48.790734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.790752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.790856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.790875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.790958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.790973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.791088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.791107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.791275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.791292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.791391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.791408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.791653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.791669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.791776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.791791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.791911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.791928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.792084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.792101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 
00:26:12.342 [2024-12-09 05:20:48.792197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.792211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.792326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.792344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.792465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.792487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.792574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.792591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.792774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.792791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.792952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.792968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.793153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.793170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.793339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.793357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.793534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.793550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.793705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.793721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 
00:26:12.342 [2024-12-09 05:20:48.793824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.793842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.794009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.794027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.794113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.794128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.794225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.794242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.342 [2024-12-09 05:20:48.794409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.342 [2024-12-09 05:20:48.794426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.342 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.794499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.794514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.794610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.794627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.794746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.794762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.795032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.795050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.795205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.795222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 
00:26:12.343 [2024-12-09 05:20:48.795359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.795375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.795483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.795501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.795594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.795610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.795833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.795849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.795964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.795981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.796232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.796247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.796346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.796363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.796465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.796483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.796577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.796596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.796746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.796764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 
00:26:12.343 [2024-12-09 05:20:48.796944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.796960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.797128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.797149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.797260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.797276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.797372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.797389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.797503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.797519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.797623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.797640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.797822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.797838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.797924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.797940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.798119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.798136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.798270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.798287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 
00:26:12.343 [2024-12-09 05:20:48.798440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.798457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.798558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.798576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.798769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.798788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.798954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.798971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.799093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.799110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.799291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.799306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.799460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.799476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.799637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.799653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.343 qpair failed and we were unable to recover it. 00:26:12.343 [2024-12-09 05:20:48.799753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.343 [2024-12-09 05:20:48.799769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.799931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.799947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 
00:26:12.344 [2024-12-09 05:20:48.800094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.800107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.800214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.800228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.800326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.800339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.800426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.800438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.800674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.800686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.800776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.800788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.800930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.800944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.801008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.801019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.801120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.801132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.801222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.801235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 
00:26:12.344 [2024-12-09 05:20:48.801384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.801397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.801581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.801594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.801749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.801763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.801846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.801858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.801924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.801934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.802018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.802031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.802220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.802233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.802409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.802422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.802577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.802589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.802801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.802820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 
00:26:12.344 [2024-12-09 05:20:48.802916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.802933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.803161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.803179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.803282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.803298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.803383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.803399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.803511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.803528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.803681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.803698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.803793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.803809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.803922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.803939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.804051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.804068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.804170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.804187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 
00:26:12.344 [2024-12-09 05:20:48.804274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.804290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.804433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.344 [2024-12-09 05:20:48.804450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.344 qpair failed and we were unable to recover it. 00:26:12.344 [2024-12-09 05:20:48.804655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.804675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.804841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.804858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.805044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.805061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.805157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.805173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.805275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.805291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.805402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.805419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.805558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.805574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.805727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.805741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 
00:26:12.345 [2024-12-09 05:20:48.805887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.805900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.806057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.806070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.806223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.806235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.806393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.806405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.806506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.806518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.806599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.806610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.806768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.806780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.806957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.806969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.807071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.807084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.807315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.807327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 
00:26:12.345 [2024-12-09 05:20:48.807413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.807425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.807604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.807617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.807761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.807773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.807872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.807884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.808066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.808078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.808241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.808254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.808488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.808501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.808596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.808607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.808684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.808695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.808935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.808952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 
00:26:12.345 [2024-12-09 05:20:48.809043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.809059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.809166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.809183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.809350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.809366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.809527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.809544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.809640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.809655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.809899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.345 [2024-12-09 05:20:48.809915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.345 qpair failed and we were unable to recover it. 00:26:12.345 [2024-12-09 05:20:48.810095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.810112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.810253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.810268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.810440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.810456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.810606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.810622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 
00:26:12.346 [2024-12-09 05:20:48.810774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.810791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.811009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.811026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.811127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.811146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.811310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.811326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.811478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.811494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.811601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.811617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.811787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.811804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.811955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.811971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.812111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.812128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.812299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.812315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 
00:26:12.346 [2024-12-09 05:20:48.812471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.812487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.812702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.812718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.812956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.812972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.813118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.813135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.813294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.813310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.813494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.813510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.813664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.813680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.813829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.813845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.814059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.814077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.814227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.814243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 
00:26:12.346 [2024-12-09 05:20:48.814407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.814424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.814608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.814624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.814732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.814748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.814984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.815004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.815202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.815218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.815455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.815471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.815716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.815732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.815976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.815992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.816157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.346 [2024-12-09 05:20:48.816174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.346 qpair failed and we were unable to recover it. 00:26:12.346 [2024-12-09 05:20:48.816395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.816409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 
00:26:12.347 [2024-12-09 05:20:48.816628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.816641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 00:26:12.347 [2024-12-09 05:20:48.816784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.816796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 00:26:12.347 [2024-12-09 05:20:48.817020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.817034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 00:26:12.347 [2024-12-09 05:20:48.817189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.817202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 00:26:12.347 [2024-12-09 05:20:48.817426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.817439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 00:26:12.347 [2024-12-09 05:20:48.817652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.817665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 00:26:12.347 [2024-12-09 05:20:48.817887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.817900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 00:26:12.347 [2024-12-09 05:20:48.818065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.818078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 00:26:12.347 [2024-12-09 05:20:48.818190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.818203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 00:26:12.347 [2024-12-09 05:20:48.818411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.818423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 
00:26:12.347 [2024-12-09 05:20:48.818652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.818665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 00:26:12.347 [2024-12-09 05:20:48.818873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.818885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 00:26:12.347 [2024-12-09 05:20:48.819142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.819158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 00:26:12.347 [2024-12-09 05:20:48.819245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.819258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 00:26:12.347 [2024-12-09 05:20:48.819345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.819358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 00:26:12.347 [2024-12-09 05:20:48.819574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.819587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 00:26:12.347 [2024-12-09 05:20:48.819844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.819857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 00:26:12.347 [2024-12-09 05:20:48.820021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.820034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 00:26:12.347 [2024-12-09 05:20:48.820276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.820288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 00:26:12.347 [2024-12-09 05:20:48.820473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.820486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 
00:26:12.347 [2024-12-09 05:20:48.820710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.820723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 00:26:12.347 [2024-12-09 05:20:48.820864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.820876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 00:26:12.347 [2024-12-09 05:20:48.821019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.821033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 00:26:12.347 [2024-12-09 05:20:48.821224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.821237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 00:26:12.347 [2024-12-09 05:20:48.821327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.821339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 00:26:12.347 [2024-12-09 05:20:48.821498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.821511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 00:26:12.347 [2024-12-09 05:20:48.821602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.821615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 00:26:12.347 [2024-12-09 05:20:48.821772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.821785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 00:26:12.347 [2024-12-09 05:20:48.821995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.347 [2024-12-09 05:20:48.822013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.347 qpair failed and we were unable to recover it. 00:26:12.347 [2024-12-09 05:20:48.822188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.822202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 
00:26:12.348 [2024-12-09 05:20:48.822411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.822424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.822585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.822598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.822683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.822695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.822868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.822882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.823029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.823043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.823148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.823161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.823383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.823396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.823653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.823666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.823826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.823839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.824080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.824097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 
00:26:12.348 [2024-12-09 05:20:48.824354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.824370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.824547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.824564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.824738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.824754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.824943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.824959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.825164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.825181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.825431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.825447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.825611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.825627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.825805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.825821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.825990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.826011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.826250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.826267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 
00:26:12.348 [2024-12-09 05:20:48.826419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.826436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.826601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.826617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.826768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.826787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.826952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.826968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.827207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.827224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.827388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.827405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.827570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.827586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.827834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.827850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.828024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.828040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.828205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.828222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 
00:26:12.348 [2024-12-09 05:20:48.828470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.828486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.828582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.828598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.828815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.348 [2024-12-09 05:20:48.828832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.348 qpair failed and we were unable to recover it. 00:26:12.348 [2024-12-09 05:20:48.829084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.349 [2024-12-09 05:20:48.829101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.349 qpair failed and we were unable to recover it. 00:26:12.349 [2024-12-09 05:20:48.829227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.349 [2024-12-09 05:20:48.829242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.349 qpair failed and we were unable to recover it. 00:26:12.349 [2024-12-09 05:20:48.829441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.349 [2024-12-09 05:20:48.829458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.349 qpair failed and we were unable to recover it. 00:26:12.349 [2024-12-09 05:20:48.829626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.349 [2024-12-09 05:20:48.829642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.349 qpair failed and we were unable to recover it. 00:26:12.349 [2024-12-09 05:20:48.829813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.349 [2024-12-09 05:20:48.829829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.349 qpair failed and we were unable to recover it. 00:26:12.349 [2024-12-09 05:20:48.830070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.349 [2024-12-09 05:20:48.830087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.349 qpair failed and we were unable to recover it. 00:26:12.349 [2024-12-09 05:20:48.830332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.349 [2024-12-09 05:20:48.830349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.349 qpair failed and we were unable to recover it. 
00:26:12.349 [2024-12-09 05:20:48.830514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.349 [2024-12-09 05:20:48.830530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420
00:26:12.349 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) against addr=10.0.0.2, port=4420 and the same qpair recovery error repeat for tqpair=0x7f96b0000b90 ...]
00:26:12.351 [2024-12-09 05:20:48.848142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.351 [2024-12-09 05:20:48.848157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420
00:26:12.351 qpair failed and we were unable to recover it.
[... the same connect() failure and qpair recovery error repeat for tqpair=0x7f96b4000b90 ...]
00:26:12.636 [2024-12-09 05:20:48.871545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.636 [2024-12-09 05:20:48.871559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420
00:26:12.636 qpair failed and we were unable to recover it.
00:26:12.636 [2024-12-09 05:20:48.871798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.871811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.871969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.871982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.872175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.872189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.872288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.872302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.872401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.872413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.872582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.872595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.872679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.872690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.872770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.872781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.873052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.873066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.873215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.873228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 
00:26:12.636 [2024-12-09 05:20:48.873528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.873541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.873724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.873737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.873916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.873930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.874167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.874183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.874270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.874282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.874538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.874553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.874772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.874784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.875045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.875059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.875319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.875332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.875559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.875573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 
00:26:12.636 [2024-12-09 05:20:48.875760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.875772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.875914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.875928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.876091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.876104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.876248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.876262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.876408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.876421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.876592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.876605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.876770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.876783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.877020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.877033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.877277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.877290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.877470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.877483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 
00:26:12.636 [2024-12-09 05:20:48.877694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.877708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.877919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.877932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.878161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.878174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.878419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.878433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.878627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.878641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.878798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.878810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.636 qpair failed and we were unable to recover it. 00:26:12.636 [2024-12-09 05:20:48.878895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.636 [2024-12-09 05:20:48.878907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.879116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.879130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.879294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.879307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.879453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.879469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 
00:26:12.637 [2024-12-09 05:20:48.879623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.879636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.879787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.879801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.879945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.879958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.880123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.880137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.880318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.880331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.880546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.880558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.880705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.880718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.880932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.880944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.881178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.881192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.881480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.881494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 
00:26:12.637 [2024-12-09 05:20:48.881601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.881624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.881833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.881846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.882058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.882072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.882306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.882319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.882557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.882570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.882713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.882726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.882805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.882816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.883004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.883017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.883252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.883264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.883439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.883452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 
00:26:12.637 [2024-12-09 05:20:48.883712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.883724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.883948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.883962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.884122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.884135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.884363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.884375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.884548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.884561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.884716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.884751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.884951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.884985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.885211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.885224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.885387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.885400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.885633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.885646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 
00:26:12.637 [2024-12-09 05:20:48.885789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.885801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.885901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.885913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.886075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.886088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.886247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.886260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.886377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.886390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.886547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.886560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.886721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.886755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.886887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.886919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.887143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.887178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.887488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.887528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 
00:26:12.637 [2024-12-09 05:20:48.887663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.887696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.887889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.887923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.888059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.888072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.888232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.888259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.888536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.888570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.888792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.888824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.889067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.889081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.889315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.889329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.637 [2024-12-09 05:20:48.889580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.637 [2024-12-09 05:20:48.889615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.637 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.889762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.889795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 
00:26:12.638 [2024-12-09 05:20:48.890082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.890117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.890313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.890326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.890517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.890550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.890770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.890803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.891081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.891093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.891238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.891251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.891405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.891419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.891580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.891614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.891897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.891930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.892215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.892251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 
00:26:12.638 [2024-12-09 05:20:48.892529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.892561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.892814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.892848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.893126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.893161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.893347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.893360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.893524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.893557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.893833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.893867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.894072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8db20 is same with the state(6) to be set 00:26:12.638 [2024-12-09 05:20:48.894402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.894440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.894650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.894688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.894824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.894842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 
00:26:12.638 [2024-12-09 05:20:48.895025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.895063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.895275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.895308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.895451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.895485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.895697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.895730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.895922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.895954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.896159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.896206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.896456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.896473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.896757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.896790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.897009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.897044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.897270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.897304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 
00:26:12.638 [2024-12-09 05:20:48.897459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.897501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.897639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.897675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.897888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.897922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.898203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.898240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.898528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.898561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.898846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.898878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.899088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.899123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.899329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.899363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.899619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.899652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.899860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.899895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 
00:26:12.638 [2024-12-09 05:20:48.900169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.900204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.900348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.900382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.900572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.900605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.900811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.900854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.901017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.638 [2024-12-09 05:20:48.901065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.638 qpair failed and we were unable to recover it. 00:26:12.638 [2024-12-09 05:20:48.901306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.901324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.901475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.901491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.901733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.901749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.901875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.901908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.902174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.902209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 
00:26:12.639 [2024-12-09 05:20:48.902412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.902446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.902715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.902759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.902928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.902946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.903118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.903136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.903383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.903419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.903619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.903651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.903864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.903898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.904092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.904128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.904275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.904308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.904557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.904591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 
00:26:12.639 [2024-12-09 05:20:48.904800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.904833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.905038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.905074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.905281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.905314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.905512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.905547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.905761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.905795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.906097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.906132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.906391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.906407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.906565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.906583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.906766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.906782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.906955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.906988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 
00:26:12.639 [2024-12-09 05:20:48.907224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.907260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.907502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.907537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.907731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.907764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.907989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.908011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.908182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.908215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.908350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.908384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.908588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.908622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.908904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.908938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.909209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.909244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.909535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.909569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 
00:26:12.639 [2024-12-09 05:20:48.909850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.909884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.910077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.910112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.910263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.910296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.910594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.910635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.910773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.910809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.911020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.911055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.911264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.911281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.911394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.911408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.911622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.911638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.911811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.911827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 
00:26:12.639 [2024-12-09 05:20:48.912025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.912060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.912268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.912302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.912587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.912622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.912871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.912888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.912994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.913014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.913223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.913257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.913463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.913495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.913695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.913728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.913923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.639 [2024-12-09 05:20:48.913956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.639 qpair failed and we were unable to recover it. 00:26:12.639 [2024-12-09 05:20:48.914165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.914200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 
00:26:12.640 [2024-12-09 05:20:48.914394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.914427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.914705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.914745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.914943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.914959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.915234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.915253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.915494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.915511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.915744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.915778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.915972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.915988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.916156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.916189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.916421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.916454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.916712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.916746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 
00:26:12.640 [2024-12-09 05:20:48.916954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.916988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.917295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.917331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.917547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.917581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.917867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.917901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.918057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.918093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.918323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.918366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.918586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.918604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.918829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.918846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.919149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.919182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.919392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.919427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 
00:26:12.640 [2024-12-09 05:20:48.919716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.919749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.919975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.920019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.920258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.920275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.920382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.920402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.920643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.920659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.920854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.920871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.921111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.921128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.921307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.921323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.921505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.921523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.921753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.921770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 
00:26:12.640 [2024-12-09 05:20:48.921915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.921933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.922173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.922190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.922437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.922471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.922667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.922700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.922939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.922973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.923263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.923310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.923416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.923433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.923677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.923695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.923933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.923949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.924138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.924156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 
00:26:12.640 [2024-12-09 05:20:48.924319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.924336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.924515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.924532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.924695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.924712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.924987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.925009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.925262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.925278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.925418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.925435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.925577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.925594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.925793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.925810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.925913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.925927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.926070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.926088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 
00:26:12.640 [2024-12-09 05:20:48.926268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.926284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.926393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.926408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.926506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.926520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.926640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.926656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.640 qpair failed and we were unable to recover it. 00:26:12.640 [2024-12-09 05:20:48.926758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.640 [2024-12-09 05:20:48.926774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.926932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.926948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.927060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.927076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.927232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.927250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.927413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.927429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.927703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.927720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 
00:26:12.641 [2024-12-09 05:20:48.927828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.927845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.928077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.928095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.928343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.928359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.928607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.928626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.928803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.928820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.929064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.929081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.929192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.929207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.929337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.929354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.929507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.929524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.929599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.929615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 
00:26:12.641 [2024-12-09 05:20:48.929709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.929725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.929831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.929847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.930029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.930046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.930230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.930247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.930463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.930479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.930734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.930750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.930856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.930870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.930981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.931023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.931154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.931188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.931430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.931463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 
00:26:12.641 [2024-12-09 05:20:48.931750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.931786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.932011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.932045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.932232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.932248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.932418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.932451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.932748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.932782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.932963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.932996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.933260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.933277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.933367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.933382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.933564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.933598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.933804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.933837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 
00:26:12.641 [2024-12-09 05:20:48.934133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.934184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.934387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.934402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.934585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.934602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.934725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.934758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.934903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.934935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.935234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.935269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.935531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.935564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.935847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.935887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.936175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.936209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.936354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.936389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 
00:26:12.641 [2024-12-09 05:20:48.936593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.936630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.936829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.936862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.937142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.937178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.937337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.937353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.937548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.937581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.937841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.937875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.938099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.641 [2024-12-09 05:20:48.938134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.641 qpair failed and we were unable to recover it. 00:26:12.641 [2024-12-09 05:20:48.938398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.938414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.938661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.938679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.938926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.938943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 
00:26:12.642 [2024-12-09 05:20:48.939107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.939124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.939230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.939247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.939434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.939450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.939554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.939569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.939720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.939736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.939917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.939934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.940100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.940135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.940355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.940388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.940610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.940642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.940863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.940898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 
00:26:12.642 [2024-12-09 05:20:48.941082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.941098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.941341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.941374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.941572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.941607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.941838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.941871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.942122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.942139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.942306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.942324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.942568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.942602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.942871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.942903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.943117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.943152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.943401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.943417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 
00:26:12.642 [2024-12-09 05:20:48.943585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.943604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.943779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.943813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.944089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.944123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.944276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.944292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.944478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.944511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.944765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.944798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.945006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.945041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.945310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.945343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.945580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.945614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.945859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.945892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 
00:26:12.642 [2024-12-09 05:20:48.946113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.946147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.946401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.946417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.946535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.946568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.946825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.946859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.947099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.947135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.947362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.947396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.947606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.947622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.947794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.947810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.948010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.948045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.948322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.948355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 
00:26:12.642 [2024-12-09 05:20:48.948544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.948577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.948845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.948878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.949031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.949048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.949156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.949170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.949390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.949425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.949644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.949676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.949908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.949942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.950238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.950280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.950525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.950542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.950734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.950750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 
00:26:12.642 [2024-12-09 05:20:48.950924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.950941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.951211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.951246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.951524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.951558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.951774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.951808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.952080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.952097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.642 qpair failed and we were unable to recover it. 00:26:12.642 [2024-12-09 05:20:48.952261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.642 [2024-12-09 05:20:48.952278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.952433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.952467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.952662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.952695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.952970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.953012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.953159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.953193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 
00:26:12.643 [2024-12-09 05:20:48.953336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.953374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.953587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.953620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.953885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.953918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.954181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.954198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.954421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.954454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.954729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.954762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.955048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.955082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.955372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.955406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.955682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.955716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.956011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.956044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 
00:26:12.643 [2024-12-09 05:20:48.956244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.956261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.956503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.956520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.956674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.956691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.956964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.957004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.957225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.957260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.957449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.957483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.957739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.957773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.957978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.957995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.958132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.958149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.958313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.958329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 
00:26:12.643 [2024-12-09 05:20:48.958490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.958507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.958679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.958713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.959017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.959053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.959352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.959386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.959650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.959682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.959935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.959970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.960257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.960329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.960650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.960689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.960913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.960948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.961152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.961186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 
00:26:12.643 [2024-12-09 05:20:48.961466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.961500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.961767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.961801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.962013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.962047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.962321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.962355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.962587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.962622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.962847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.962880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.963072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.963085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.963189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.963200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.963415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.963447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.963648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.963681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 
00:26:12.643 [2024-12-09 05:20:48.963955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.963996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.964290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.964324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.964634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.964668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.964974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.965016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.965278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.965312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.965487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.965500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.965597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.965608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.965871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.965905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.966116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.966151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.966421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.966454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 
00:26:12.643 [2024-12-09 05:20:48.966660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.966694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.966885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.966918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.967167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.967180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.967341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.967375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.643 [2024-12-09 05:20:48.967649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.643 [2024-12-09 05:20:48.967683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.643 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.967837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.967870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.968063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.968097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.968358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.968392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.968601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.968634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.968823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.968855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 
00:26:12.644 [2024-12-09 05:20:48.969131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.969167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.969383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.969416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.969612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.969646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.969917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.969951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.970104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.970138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.970337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.970370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.970574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.970608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.970823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.970856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.970985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.971002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.971176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.971209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 
00:26:12.644 [2024-12-09 05:20:48.971443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.971477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.971764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.971797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.971945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.971979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.972197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.972231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.972442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.972475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.972608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.972641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.972789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.972822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.973099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.973134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.973286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.973319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.973437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.973450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 
00:26:12.644 [2024-12-09 05:20:48.973622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.973661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.973920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.973953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.974172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.974207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.974409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.974443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.974706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.974740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.974978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.975028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.975180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.975213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.975463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.975476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.975686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.975699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.975806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.975818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 
00:26:12.644 [2024-12-09 05:20:48.975907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.975918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.976088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.976102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.976319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.976353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.976634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.976667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.976881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.976916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.977115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.977150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.977376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.977409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.977621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.977654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.977865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.977898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.978165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.978178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 
00:26:12.644 [2024-12-09 05:20:48.978423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.978456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.978716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.978749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.978875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.978919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.979134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.979148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.979311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.979324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.979489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.979522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.979675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.979708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.979971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.980014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.980224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.980257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.980445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.980477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 
00:26:12.644 [2024-12-09 05:20:48.980660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.980672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.980852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.644 [2024-12-09 05:20:48.980885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.644 qpair failed and we were unable to recover it. 00:26:12.644 [2024-12-09 05:20:48.981026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.981062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.981187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.981221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.981425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.981457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.981715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.981748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.981936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.981949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.982109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.982122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.982213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.982224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.982434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.982446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 
00:26:12.645 [2024-12-09 05:20:48.982542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.982555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.982704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.982737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.982861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.982905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.983050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.983062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.983280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.983313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.983463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.983495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.983642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.983675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.983945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.983986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.984224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.984237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.984380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.984394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 
00:26:12.645 [2024-12-09 05:20:48.984550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.984563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.984805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.984838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.984990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.985035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.985317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.985350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.985503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.985536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.985687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.985720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.985855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.985887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.986088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.986126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.986389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.986402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.986559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.986572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 
00:26:12.645 [2024-12-09 05:20:48.986787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.986799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.986991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.987022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.987133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.987144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.987400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.987412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.987578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.987591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.987765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.987777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.987940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.987953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.988044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.988057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.988142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.988153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.988322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.988356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 
00:26:12.645 [2024-12-09 05:20:48.988477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.988509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.988712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.988745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.988882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.988915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.989103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.989138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.989403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.989437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.989532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.989542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.989705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.989716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.989964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.989977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.990168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.990200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.990406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.990440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 
00:26:12.645 [2024-12-09 05:20:48.990593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.990632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.990889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.990922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.991148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.991183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.991384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.991417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.991691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.991723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.991924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.991957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.992172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.992206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.992430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.992464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.992660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.992692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.992835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.992868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 
00:26:12.645 [2024-12-09 05:20:48.993064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.993078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.645 [2024-12-09 05:20:48.993224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.645 [2024-12-09 05:20:48.993237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.645 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.993458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.993470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.993613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.993636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.993742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.993754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.993970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.993982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.994220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.994233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.994450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.994462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.994612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.994624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.994799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.994831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 
00:26:12.646 [2024-12-09 05:20:48.995080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.995114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.995381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.995393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.995642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.995656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.995867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.995880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.996051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.996065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.996287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.996332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.996576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.996619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.996844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.996887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.997120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.997166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.997306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.997343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 
00:26:12.646 [2024-12-09 05:20:48.997553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.997565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.997667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.997678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.997782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.997793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.997934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.997946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.998083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.998096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.998244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.998280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.998559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.998592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.998735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.998769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.998916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.998948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.999129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.999163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 
00:26:12.646 [2024-12-09 05:20:48.999306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.999341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.999432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.999444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.999555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.999566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.999728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.999740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:48.999839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:48.999851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:49.000027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.000040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:49.000218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.000230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:49.000387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.000399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:49.000647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.000659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:49.000884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.000916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 
00:26:12.646 [2024-12-09 05:20:49.001131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.001166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:49.001442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.001455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:49.001667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.001700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:49.001888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.001921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:49.002093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.002106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:49.002320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.002353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:49.002611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.002644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:49.002944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.002977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:49.003257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.003290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:49.003545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.003578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 
00:26:12.646 [2024-12-09 05:20:49.003833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.003866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:49.004083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.004119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:49.004396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.004428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:49.004630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.004663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:49.004885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.004918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:49.005225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.005260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:49.005491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.005525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:49.005888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.005959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:49.006319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.006404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:49.006682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.006719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 
00:26:12.646 [2024-12-09 05:20:49.006862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.006896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:49.007185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.007204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:49.007313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.646 [2024-12-09 05:20:49.007330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.646 qpair failed and we were unable to recover it. 00:26:12.646 [2024-12-09 05:20:49.007445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.007462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.007633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.007650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.007901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.007936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.008232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.008268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.008507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.008540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.008764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.008796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.009039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.009073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 
00:26:12.647 [2024-12-09 05:20:49.009218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.009251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.009534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.009551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.009719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.009736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.009978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.009994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.010181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.010215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.010435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.010468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.010630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.010663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.010820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.010853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.011149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.011184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.011344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.011376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 
00:26:12.647 [2024-12-09 05:20:49.011539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.011571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.011790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.011823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.012128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.012162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.012368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.012400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.012723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.012760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.013109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.013143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.013421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.013434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.013728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.013761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.013979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.014019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.014184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.014197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 
00:26:12.647 [2024-12-09 05:20:49.014370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.014403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.014692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.014725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.014981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.015022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.015219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.015254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.015512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.015545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.015763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.015796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.015988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.016035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.016314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.016353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.016557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.016591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.016789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.016823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 
00:26:12.647 [2024-12-09 05:20:49.017125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.017160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.017410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.017443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.017668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.017701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.017908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.017942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.018218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.018251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.018551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.018584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.018812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.018846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.019149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.019183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.019450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.019483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 00:26:12.647 [2024-12-09 05:20:49.019710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.647 [2024-12-09 05:20:49.019742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.647 qpair failed and we were unable to recover it. 
00:26:12.647 [2024-12-09 05:20:49.020013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.647 [2024-12-09 05:20:49.020047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420
00:26:12.647 qpair failed and we were unable to recover it.
00:26:12.651 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it) repeats continuously from 05:20:49.020 through 05:20:49.074 ...]
00:26:12.651 [2024-12-09 05:20:49.074596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.074610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.074851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.074885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.075165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.075200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.075419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.075452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.075759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.075798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.076032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.076067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.076306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.076339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.076522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.076534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.076744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.076757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.076855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.076867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 
00:26:12.651 [2024-12-09 05:20:49.077025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.077039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.077220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.077233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.077469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.077481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.077698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.077711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.077971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.078012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.078249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.078282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.078479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.078492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.078657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.078690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.078893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.078926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.079133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.079169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 
00:26:12.651 [2024-12-09 05:20:49.079416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.079428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.079664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.079677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.079880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.079913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.080218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.080263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.080420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.080432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.080698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.080731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.081017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.081051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.081265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.081299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.081556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.081590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.081840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.081853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 
00:26:12.651 [2024-12-09 05:20:49.082096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.082110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.082352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.082383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.082574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.082607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.082872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.082906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.083184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.083220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.083466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.083499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.083784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.083818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.084054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.084089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.084323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.084357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 00:26:12.651 [2024-12-09 05:20:49.084625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.651 [2024-12-09 05:20:49.084658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.651 qpair failed and we were unable to recover it. 
00:26:12.652 [2024-12-09 05:20:49.084917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.084950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.085251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.085286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.085510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.085543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.085846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.085879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.086154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.086196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.086348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.086361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.086603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.086636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.086856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.086889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.087178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.087214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.087502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.087536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 
00:26:12.652 [2024-12-09 05:20:49.087821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.087855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.088073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.088108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.088248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.088281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.088538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.088571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.088829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.088862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.089168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.089203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.089474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.089507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.089759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.089772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.089939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.089952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.090200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.090213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 
00:26:12.652 [2024-12-09 05:20:49.090470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.090483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.090720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.090754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.090951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.090985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.091270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.091304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.091445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.091477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.091678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.091712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.091996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.092042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.092233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.092266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.092539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.092572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.092785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.092819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 
00:26:12.652 [2024-12-09 05:20:49.093123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.093159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.093436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.093449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.093755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.093789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.094085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.094120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.094323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.094357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.094638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.094671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.094963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.094996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.095214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.095248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.095529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.095561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.095763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.095797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 
00:26:12.652 [2024-12-09 05:20:49.096088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.096122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.096324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.096357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.096641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.096675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.096885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.096918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.097113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.097155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.097415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.097446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.097627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.097638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.097879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.097910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.098118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.098154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.098347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.098381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 
00:26:12.652 [2024-12-09 05:20:49.098620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.098633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.098784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.098797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.099038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.099074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.099354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.099389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.099673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.099705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.099970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.100015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.100253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.100287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.100499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.100512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.100804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.100839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 00:26:12.652 [2024-12-09 05:20:49.101042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.652 [2024-12-09 05:20:49.101078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.652 qpair failed and we were unable to recover it. 
00:26:12.653 [2024-12-09 05:20:49.101375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.101413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.101681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.101696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.101943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.101956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.102185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.102199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.102421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.102454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.102689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.102722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.102987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.103041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.103212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.103246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.103464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.103499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.103647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.103687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 
00:26:12.653 [2024-12-09 05:20:49.103875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.103889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.104040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.104053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.104227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.104266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.104404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.104439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.104647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.104682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.104881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.104917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.105184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.105219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.105413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.105448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.105715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.105727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.105962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.105974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 
00:26:12.653 [2024-12-09 05:20:49.106142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.106156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.106406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.106441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.106646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.106680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.106880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.106914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.107150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.107191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.107472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.107485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.107723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.107736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.107844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.107858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.108036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.108070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.108362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.108396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 
00:26:12.653 [2024-12-09 05:20:49.108590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.108632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.108863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.108877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.109103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.109118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.109269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.109282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.109447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.109482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.109628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.109661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.109960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.109992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.110151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.110186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.110479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.110513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.110791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.110826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 
00:26:12.653 [2024-12-09 05:20:49.111039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.111076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.111290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.111324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.111630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.111664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.111934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.111968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.112264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.112339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.112574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.112611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.112900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.112935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.113241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.113278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.113547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.113581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.113726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.113760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 
00:26:12.653 [2024-12-09 05:20:49.113978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.114024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.114386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.114459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.114691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.114709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.114899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.114934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.115138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.115183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.115450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.115484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.653 [2024-12-09 05:20:49.115717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.653 [2024-12-09 05:20:49.115750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.653 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.116022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.116060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.116261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.116295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.116498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.116534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 
00:26:12.654 [2024-12-09 05:20:49.116780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.116797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.117046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.117082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.117354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.117386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.117512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.117530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.117711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.117745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.118040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.118075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.118353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.118387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.118610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.118652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.118819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.118835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.119027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.119063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 
00:26:12.654 [2024-12-09 05:20:49.119293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.119328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.119561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.119595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.119879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.119912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.120107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.120142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.120347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.120379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.120511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.120546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.120749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.120766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.121026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.121061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.121348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.121387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.121665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.121682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 
00:26:12.654 [2024-12-09 05:20:49.121839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.121857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.122029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.122064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.122323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.122356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.122646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.122680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.122968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.123009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.123167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.123202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.123350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.123385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.123644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.123678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.123881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.123914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.124185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.124221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 
00:26:12.654 [2024-12-09 05:20:49.124514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.124551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.124831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.124866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.125086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.125145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.125386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.125432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.125596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.125612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.125841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.125875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.126015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.126051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.126263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.126297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.126527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.126562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.126750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.126783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 
00:26:12.654 [2024-12-09 05:20:49.127046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.127082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.127359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.127393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.127557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.127589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.127849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.127868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.128050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.128068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.128300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.128320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.128489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.128507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.128622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.128653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.128851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.128883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.129095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.129130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 
00:26:12.654 [2024-12-09 05:20:49.129333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.129372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.129580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.129616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.129837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.129853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.130049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.130085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.130281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.130316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.130509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.130542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.130751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.130784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.131019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.131060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.131284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.131317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 00:26:12.654 [2024-12-09 05:20:49.131559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.131600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.654 qpair failed and we were unable to recover it. 
00:26:12.654 [2024-12-09 05:20:49.131796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-12-09 05:20:49.131813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.132017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.132035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.132272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.132305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.132590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.132624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.132754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.132771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.132878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.132895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.133145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.133184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.133347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.133381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.133638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.133672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.133847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.133864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 
00:26:12.655 [2024-12-09 05:20:49.134044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.134079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.134281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.134313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.134507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.134548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.134783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.134817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.135017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.135053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.135248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.135283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.135568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.135602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.135900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.135933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.136131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.136168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.136400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.136418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 
00:26:12.655 [2024-12-09 05:20:49.136594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.136627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.136819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.136852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.137131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.137167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.137361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.137394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.137616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.137649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.137856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.137874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.138082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.138118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.138327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.138362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.138622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.138657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.138910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.138927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 
00:26:12.655 [2024-12-09 05:20:49.139097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.139114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.139289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.139307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.139553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.139571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.139765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.139799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.140015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.140051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.140357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.140390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.140597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.140631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.140847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.140865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.141094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.141129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.141341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.141374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 
00:26:12.655 [2024-12-09 05:20:49.141523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.141556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.141843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.141876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.142138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.142173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.142378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.142410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.142570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.142604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.142814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.142830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.143090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.143125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.143351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.143385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.143575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.143609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.143768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.143806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 
00:26:12.655 [2024-12-09 05:20:49.143984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.144013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.144254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.144272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.144462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.144479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.144758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.144835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.145153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.145193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.145432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.145466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.145776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.145804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.145962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.145976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.146224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.146260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.146479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.146512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 
00:26:12.655 [2024-12-09 05:20:49.146797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.146832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.655 qpair failed and we were unable to recover it. 00:26:12.655 [2024-12-09 05:20:49.147093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.655 [2024-12-09 05:20:49.147128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.147441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.147474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.147635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.147668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.147878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.147913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.148140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.148176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.148405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.148448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.148593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.148627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.148831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.148866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.149066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.149102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 
00:26:12.656 [2024-12-09 05:20:49.149244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.149278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.149602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.149634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.149853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.149887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.150154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.150190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.150465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.150498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.150630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.150665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.150917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.150930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.151021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.151033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.151341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.151375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.151596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.151631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 
00:26:12.656 [2024-12-09 05:20:49.151779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.151822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.151988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.152004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.152218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.152233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.152426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.152439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.152625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.152658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.152817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.152850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.153110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.153144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.153367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.153402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.153661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.153695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.153891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.153924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 
00:26:12.656 [2024-12-09 05:20:49.154143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.154180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.154422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.154456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.154726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.154762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.154967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.155009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.155275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.155309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.155573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.155606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.155763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.155798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.156052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.156066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.156344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.156377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.156689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.156722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 
00:26:12.656 [2024-12-09 05:20:49.156861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.156893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.157111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.157148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.157385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.157420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.157635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.157668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.157866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.157879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.158053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.158068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.158296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.158309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.158550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.158564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.158648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.158660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.158837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.158849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 
00:26:12.656 [2024-12-09 05:20:49.159044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.159080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.159371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.159406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.159687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.159722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.159915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.159928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.160143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.160157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.160313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.160347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.160638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.160674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.160822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.656 [2024-12-09 05:20:49.160855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.656 qpair failed and we were unable to recover it. 00:26:12.656 [2024-12-09 05:20:49.161145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.161182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.161345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.161379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 
00:26:12.657 [2024-12-09 05:20:49.161659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.161701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.161875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.161888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.162110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.162147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.162374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.162410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.162691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.162725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.162940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.162953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.163218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.163232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.163394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.163428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.163764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.163797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.164015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.164030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 
00:26:12.657 [2024-12-09 05:20:49.164252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.164267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.164372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.164406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.164693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.164729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.165040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.165083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.165319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.165366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.165580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.165616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.165879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.165914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.166230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.166266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.166551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.166565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.166854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.166890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 
00:26:12.657 [2024-12-09 05:20:49.167156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.167192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.167500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.167533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.167745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.167780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.168022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.168058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.168192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.168203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.168406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.168420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.168586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.168599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.168749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.168764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.168922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.168936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.169184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.169219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 
00:26:12.657 [2024-12-09 05:20:49.169428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.169463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.169589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.169623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.169911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.169945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.170157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.170193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.170408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.170440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.170652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.170686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.170861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.170895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.171100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.171135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.171420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.171465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.171727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.171740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 
00:26:12.657 [2024-12-09 05:20:49.171960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.171995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.172216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.172249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.172535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.172569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.172832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.172866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.173071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.173106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.173315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.173352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.173478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.173492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.173656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.173689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.173839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.173874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.174138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.174174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 
00:26:12.657 [2024-12-09 05:20:49.174338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.174370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.174631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.174665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.174954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.174987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.175193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.175232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.175378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.175412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.175606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.175640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.175783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.175817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.176076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.176111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.176398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.176432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.176634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.176667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 
00:26:12.657 [2024-12-09 05:20:49.176883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.657 [2024-12-09 05:20:49.176896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.657 qpair failed and we were unable to recover it. 00:26:12.657 [2024-12-09 05:20:49.177129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.177164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.177374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.177409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.177577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.177611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.177822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.177857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.178066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.178110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.178282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.178295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.178418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.178451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.178712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.178747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.178967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.179010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 
00:26:12.658 [2024-12-09 05:20:49.179209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.179243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.179458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.179493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.179629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.179662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.179914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.179948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.180272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.180308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.180590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.180623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.180838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.180851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.181089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.181103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.181324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.181357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.181486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.181520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 
00:26:12.658 [2024-12-09 05:20:49.181809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.181843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.182149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.182163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.182270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.182282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.182444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.182479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.182737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.182772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.182985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.183026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.183184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.183218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.183447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.183480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.183680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.183713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.184005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.184018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 
00:26:12.658 [2024-12-09 05:20:49.184116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.184128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.184364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.184378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.184537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.184550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.184704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.184745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.185038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.185074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.185302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.185335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.185545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.185579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.185845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.185878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.186075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.186109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.186385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.186417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 
00:26:12.658 [2024-12-09 05:20:49.186561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.186574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.186662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.186674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.186793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.186828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.187043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.187080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.187240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.187274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.187493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.187529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.187724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.187759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.188048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.188061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.188228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.188241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.188427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.188461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 
00:26:12.658 [2024-12-09 05:20:49.188741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.188776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.188910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.188949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.189051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.189062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.189247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.189295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.189557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.189592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.189852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.189886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.190172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.190206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.658 [2024-12-09 05:20:49.190494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.658 [2024-12-09 05:20:49.190528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.658 qpair failed and we were unable to recover it. 00:26:12.659 [2024-12-09 05:20:49.190669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.659 [2024-12-09 05:20:49.190703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.659 qpair failed and we were unable to recover it. 00:26:12.659 [2024-12-09 05:20:49.190931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.659 [2024-12-09 05:20:49.190944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.659 qpair failed and we were unable to recover it. 
00:26:12.659 [2024-12-09 05:20:49.191116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.659 [2024-12-09 05:20:49.191129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.659 qpair failed and we were unable to recover it. 00:26:12.659 [2024-12-09 05:20:49.191216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.659 [2024-12-09 05:20:49.191228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.659 qpair failed and we were unable to recover it. 00:26:12.659 [2024-12-09 05:20:49.191429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.659 [2024-12-09 05:20:49.191462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.659 qpair failed and we were unable to recover it. 00:26:12.659 [2024-12-09 05:20:49.191694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.659 [2024-12-09 05:20:49.191727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.659 qpair failed and we were unable to recover it. 00:26:12.659 [2024-12-09 05:20:49.192018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.659 [2024-12-09 05:20:49.192055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.659 qpair failed and we were unable to recover it. 00:26:12.659 [2024-12-09 05:20:49.192337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.659 [2024-12-09 05:20:49.192370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.659 qpair failed and we were unable to recover it. 00:26:12.659 [2024-12-09 05:20:49.192633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.659 [2024-12-09 05:20:49.192666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.659 qpair failed and we were unable to recover it. 00:26:12.659 [2024-12-09 05:20:49.193020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.659 [2024-12-09 05:20:49.193056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.659 qpair failed and we were unable to recover it. 00:26:12.659 [2024-12-09 05:20:49.193346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.659 [2024-12-09 05:20:49.193381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.659 qpair failed and we were unable to recover it. 00:26:12.659 [2024-12-09 05:20:49.193650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.659 [2024-12-09 05:20:49.193684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.659 qpair failed and we were unable to recover it. 
00:26:12.659 [2024-12-09 05:20:49.193971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.659 [2024-12-09 05:20:49.194018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.659 qpair failed and we were unable to recover it. 00:26:12.659 [2024-12-09 05:20:49.194220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.659 [2024-12-09 05:20:49.194255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.659 qpair failed and we were unable to recover it. 00:26:12.659 [2024-12-09 05:20:49.194484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.659 [2024-12-09 05:20:49.194519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.659 qpair failed and we were unable to recover it. 00:26:12.659 [2024-12-09 05:20:49.194778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.659 [2024-12-09 05:20:49.194816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.659 qpair failed and we were unable to recover it. 00:26:12.659 [2024-12-09 05:20:49.195066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.659 [2024-12-09 05:20:49.195101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.659 qpair failed and we were unable to recover it. 00:26:12.659 [2024-12-09 05:20:49.195389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.659 [2024-12-09 05:20:49.195426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.659 qpair failed and we were unable to recover it. 00:26:12.659 [2024-12-09 05:20:49.195668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.659 [2024-12-09 05:20:49.195681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.659 qpair failed and we were unable to recover it. 00:26:12.659 [2024-12-09 05:20:49.195902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.659 [2024-12-09 05:20:49.195914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.659 qpair failed and we were unable to recover it. 00:26:12.659 [2024-12-09 05:20:49.196081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.659 [2024-12-09 05:20:49.196119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.659 qpair failed and we were unable to recover it. 00:26:12.659 [2024-12-09 05:20:49.196344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.659 [2024-12-09 05:20:49.196390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.659 qpair failed and we were unable to recover it. 
00:26:12.662 [2024-12-09 05:20:49.246415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.662 [2024-12-09 05:20:49.246431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.662 qpair failed and we were unable to recover it. 00:26:12.662 [2024-12-09 05:20:49.246675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.662 [2024-12-09 05:20:49.246713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.662 qpair failed and we were unable to recover it. 00:26:12.662 [2024-12-09 05:20:49.246982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.662 [2024-12-09 05:20:49.247037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.662 qpair failed and we were unable to recover it. 00:26:12.662 [2024-12-09 05:20:49.247319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.662 [2024-12-09 05:20:49.247366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.662 qpair failed and we were unable to recover it. 00:26:12.662 [2024-12-09 05:20:49.247512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.662 [2024-12-09 05:20:49.247547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.662 qpair failed and we were unable to recover it. 00:26:12.662 [2024-12-09 05:20:49.247767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.662 [2024-12-09 05:20:49.247804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.662 qpair failed and we were unable to recover it. 00:26:12.662 [2024-12-09 05:20:49.248088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.662 [2024-12-09 05:20:49.248104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.662 qpair failed and we were unable to recover it. 00:26:12.662 [2024-12-09 05:20:49.248358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.662 [2024-12-09 05:20:49.248373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.662 qpair failed and we were unable to recover it. 00:26:12.662 [2024-12-09 05:20:49.248540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.662 [2024-12-09 05:20:49.248554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.662 qpair failed and we were unable to recover it. 00:26:12.662 [2024-12-09 05:20:49.248771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.662 [2024-12-09 05:20:49.248784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.662 qpair failed and we were unable to recover it. 
00:26:12.662 [2024-12-09 05:20:49.249020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.662 [2024-12-09 05:20:49.249035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.662 qpair failed and we were unable to recover it. 00:26:12.662 [2024-12-09 05:20:49.249226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.662 [2024-12-09 05:20:49.249240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.662 qpair failed and we were unable to recover it. 00:26:12.662 [2024-12-09 05:20:49.249359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.662 [2024-12-09 05:20:49.249373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.662 qpair failed and we were unable to recover it. 00:26:12.662 [2024-12-09 05:20:49.249561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.662 [2024-12-09 05:20:49.249598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.662 qpair failed and we were unable to recover it. 00:26:12.662 [2024-12-09 05:20:49.249818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.662 [2024-12-09 05:20:49.249833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.662 qpair failed and we were unable to recover it. 00:26:12.662 [2024-12-09 05:20:49.250074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.662 [2024-12-09 05:20:49.250113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.662 qpair failed and we were unable to recover it. 00:26:12.662 [2024-12-09 05:20:49.250377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.662 [2024-12-09 05:20:49.250414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.662 qpair failed and we were unable to recover it. 00:26:12.662 [2024-12-09 05:20:49.254183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.662 [2024-12-09 05:20:49.254201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.662 qpair failed and we were unable to recover it. 00:26:12.662 [2024-12-09 05:20:49.254462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.662 [2024-12-09 05:20:49.254497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.662 qpair failed and we were unable to recover it. 00:26:12.663 [2024-12-09 05:20:49.254744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.663 [2024-12-09 05:20:49.254780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.663 qpair failed and we were unable to recover it. 
00:26:12.663 [2024-12-09 05:20:49.255067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.663 [2024-12-09 05:20:49.255082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.663 qpair failed and we were unable to recover it. 00:26:12.663 [2024-12-09 05:20:49.255247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.663 [2024-12-09 05:20:49.255260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.663 qpair failed and we were unable to recover it. 00:26:12.663 [2024-12-09 05:20:49.255427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.663 [2024-12-09 05:20:49.255441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.663 qpair failed and we were unable to recover it. 00:26:12.663 [2024-12-09 05:20:49.255605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.663 [2024-12-09 05:20:49.255619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.663 qpair failed and we were unable to recover it. 00:26:12.663 [2024-12-09 05:20:49.255791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.663 [2024-12-09 05:20:49.255806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.663 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.255970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.255984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.256141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.256155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.256331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.256345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.256519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.256534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.256712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.256726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 
00:26:12.939 [2024-12-09 05:20:49.256897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.256911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.257085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.257103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.257328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.257341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.257458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.257475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.257642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.257657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.257752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.257765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.257850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.257862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.257968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.257980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.258142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.258156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.258304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.258318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 
00:26:12.939 [2024-12-09 05:20:49.258508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.258523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.258784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.258799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.258902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.258915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.259135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.259150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.259388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.259403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.259629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.259642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.259804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.259818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.260046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.260060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.260212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.260227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.260312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.260325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 
00:26:12.939 [2024-12-09 05:20:49.260512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.260528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.260697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.260710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.260875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.260889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.261052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.261066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.261265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.261301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.261524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.261559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.261825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.261858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.262062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.262100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.262269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.262303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 00:26:12.939 [2024-12-09 05:20:49.262547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.939 [2024-12-09 05:20:49.262583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.939 qpair failed and we were unable to recover it. 
00:26:12.939 [2024-12-09 05:20:49.262786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.262864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.263116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.263136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.263316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.263354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.263642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.263678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.263892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.263926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.264164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.264203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.264421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.264455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.264673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.264707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.264919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.264955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.265163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.265202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 
00:26:12.940 [2024-12-09 05:20:49.265507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.265556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.265783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.265818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.266078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.266115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.266388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.266431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.266640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.266675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.266928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.266963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.267193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.267229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.267499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.267533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.267784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.267820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.267955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.267974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 
00:26:12.940 [2024-12-09 05:20:49.268182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.268200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.268413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.268430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.268736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.268773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.268926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.268962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.269131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.269176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.269339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.269373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.269604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.269643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.269874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.269889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.270069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.270106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.270301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.270345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 
00:26:12.940 [2024-12-09 05:20:49.270489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.270527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.270824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.270860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.271014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.271051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.271268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.271304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.271524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.271558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.271701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.271738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.271993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.272042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.272234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.940 [2024-12-09 05:20:49.272250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.940 qpair failed and we were unable to recover it. 00:26:12.940 [2024-12-09 05:20:49.272524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.272559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.272856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.272893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 
00:26:12.941 [2024-12-09 05:20:49.273166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.273183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.273343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.273358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.273530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.273565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.273857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.273894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.274047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.274061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.274192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.274206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.274371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.274387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.274592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.274628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.274787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.274828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.275036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.275072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 
00:26:12.941 [2024-12-09 05:20:49.275344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.275378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.275672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.275707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.276006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.276041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.276211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.276253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.276557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.276591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.276776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.276811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.276956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.276991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.277317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.277350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.277626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.277660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.277931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.277968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 
00:26:12.941 [2024-12-09 05:20:49.278190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.278209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.278342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.278359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.278541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.278561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.278737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.278774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.278983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.279032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.279268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.279303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.279534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.279568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.279883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.279902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.280091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.280112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.280282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.280318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 
00:26:12.941 [2024-12-09 05:20:49.280525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.280561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.280801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.280820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.281060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.281096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.281369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.281405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.281713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.281746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.282037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.941 [2024-12-09 05:20:49.282074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.941 qpair failed and we were unable to recover it. 00:26:12.941 [2024-12-09 05:20:49.282366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.282400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.282686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.282720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.282917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.282962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.283210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.283227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 
00:26:12.942 [2024-12-09 05:20:49.283397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.283415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.283576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.283595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.283759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.283776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.283965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.284009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.284161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.284196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.284402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.284435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.284656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.284691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.285018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.285054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.285247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.285265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.285503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.285538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 
00:26:12.942 [2024-12-09 05:20:49.285772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.285806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.286032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.286069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.286264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.286281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.286428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.286463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.286666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.286700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.286939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.286974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.287226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.287261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.287537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.287572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.287870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.287904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.288113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.288149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 
00:26:12.942 [2024-12-09 05:20:49.288296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.288331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.288633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.288668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.288965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.289010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.289194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.289212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.289392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.289425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.289653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.289688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.289970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.290015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.290322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.290340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.290525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.290543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.290725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.290760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 
00:26:12.942 [2024-12-09 05:20:49.291046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.291064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.291245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.291262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.291457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.291492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.942 [2024-12-09 05:20:49.291786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.942 [2024-12-09 05:20:49.291820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.942 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.292042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.292060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.292298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.292315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.292549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.292583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.292838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.292872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.293208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.293244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.293447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.293482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 
00:26:12.943 [2024-12-09 05:20:49.293826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.293867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.294075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.294110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.294320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.294355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.294638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.294672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.294894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.294929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.295150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.295168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.295429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.295462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.295755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.295790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.296009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.296027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.296334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.296369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 
00:26:12.943 [2024-12-09 05:20:49.296657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.296690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.296984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.297027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.297309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.297344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.297631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.297667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.297961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.297995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.298210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.298244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.298481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.298516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.298813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.298847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.299070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.299107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.299339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.299373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 
00:26:12.943 [2024-12-09 05:20:49.299602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.299636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.299785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.299827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.300066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.300084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.300340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.300380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.300676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.300712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.301018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.943 [2024-12-09 05:20:49.301055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.943 qpair failed and we were unable to recover it. 00:26:12.943 [2024-12-09 05:20:49.301309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.301344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.301656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.301691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.301961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.301979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.302228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.302246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 
00:26:12.944 [2024-12-09 05:20:49.302481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.302498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.302733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.302750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.302920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.302937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.303193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.303229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.303457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.303491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.303798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.303832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.304066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.304102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.304321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.304355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.304644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.304680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.304988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.305031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 
00:26:12.944 [2024-12-09 05:20:49.305188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.305229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.305521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.305556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.305779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.305814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.306104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.306142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.306440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.306475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.306761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.306794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.307090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.307128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.307263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.307297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.307497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.307530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.307827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.307869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 
00:26:12.944 [2024-12-09 05:20:49.308165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.308203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.308432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.308470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.308675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.308722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.308993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.309018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.309210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.309228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.309332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.309349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.309634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.309654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.309837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.309856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.310026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.310045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.310232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.310250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 
00:26:12.944 [2024-12-09 05:20:49.310440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.310457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.310621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.310639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.310900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.310938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.311081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-12-09 05:20:49.311117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.944 qpair failed and we were unable to recover it. 00:26:12.944 [2024-12-09 05:20:49.311329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.311363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.311648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.311683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.311974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.312019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.312229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.312262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.312480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.312513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.312726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.312760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 
00:26:12.945 [2024-12-09 05:20:49.313041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.313079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.313375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.313408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.313681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.313715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.314031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.314067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.314361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.314396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.314694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.314738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.314933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.314951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.315114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.315132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.315310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.315345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.315548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.315581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 
00:26:12.945 [2024-12-09 05:20:49.315874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.315916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.316211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.316247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.316465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.316500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.316788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.316823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.317109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.317128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.317364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.317399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.317640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.317674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.317941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.317975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.318225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.318260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.318404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.318439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 
00:26:12.945 [2024-12-09 05:20:49.318710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.318756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.319040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.319076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.319277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.319312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.319602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.319636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.319912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.319947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.320255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.320290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.320509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.320543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.320763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.320797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.321104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.321141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.321350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.321367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 
00:26:12.945 [2024-12-09 05:20:49.321609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.321626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.321796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-12-09 05:20:49.321831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.945 qpair failed and we were unable to recover it. 00:26:12.945 [2024-12-09 05:20:49.322032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.322069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.322362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.322397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.322683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.322717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.322926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.322943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.323110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.323128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.323394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.323429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.323591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.323625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.323863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.323899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 
00:26:12.946 [2024-12-09 05:20:49.324190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.324227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.324450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.324485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.324649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.324684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.324982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.325006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.325185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.325202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.325461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.325496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.325626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.325660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.325977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.326022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.326339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.326373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.326644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.326679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 
00:26:12.946 [2024-12-09 05:20:49.326826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.326865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.327185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.327222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.327434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.327451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.327689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.327723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.328045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.328080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.328365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.328383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.328571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.328605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.328891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.328925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.329148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.329167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.329330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.329348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 
00:26:12.946 [2024-12-09 05:20:49.329609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.329642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.329917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.329952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.330193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.330211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.330509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.330543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.330769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.330803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.331046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.331083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.331365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.331399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.331698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.331732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.332010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.332029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 00:26:12.946 [2024-12-09 05:20:49.332268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.332285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.946 qpair failed and we were unable to recover it. 
00:26:12.946 [2024-12-09 05:20:49.332541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-12-09 05:20:49.332586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.332902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.332936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.333232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.333250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.333431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.333466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.333762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.333796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.334091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.334128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.334422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.334456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.334688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.334723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.334967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.335023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.335259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.335277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 
00:26:12.947 [2024-12-09 05:20:49.335534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.335552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.335842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.335877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.336013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.336031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.336279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.336315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.336532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.336566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.336836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.336871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.337085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.337121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.337339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.337356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.337453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.337469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.337703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.337720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 
00:26:12.947 [2024-12-09 05:20:49.337979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.338007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.338177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.338194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.338383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.338400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.338550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.338585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.338735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.338769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.339024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.339060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.339377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.339412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.339623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.339657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.339925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.339960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.340292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.340327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 
00:26:12.947 [2024-12-09 05:20:49.340621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.340656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.340862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.340896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.341164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.341199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.341473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.341507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.341726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.341761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.342055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.342091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.342363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.342398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.342671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.342705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.947 qpair failed and we were unable to recover it. 00:26:12.947 [2024-12-09 05:20:49.342934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.947 [2024-12-09 05:20:49.342952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.343122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.343158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 
00:26:12.948 [2024-12-09 05:20:49.343449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.343483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.343681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.343715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.344014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.344051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.344326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.344343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.344525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.344560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.344832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.344867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.345160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.345179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.345366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.345384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.345664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.345681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.345889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.345907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 
00:26:12.948 [2024-12-09 05:20:49.346105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.346123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.346423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.346458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.346689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.346722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.346933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.346967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.347221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.347257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.347397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.347432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.347729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.347763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.348049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.348070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.348384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.348417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.348758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.348795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 
00:26:12.948 [2024-12-09 05:20:49.349084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.349108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.349346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.349363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.349542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.349561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.349680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.349698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.349937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.349974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.350276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.350312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.350582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.350601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.350837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.350855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.351028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.351047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.351254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.351289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 
00:26:12.948 [2024-12-09 05:20:49.351623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.351656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.351980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.352025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.948 [2024-12-09 05:20:49.352318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.948 [2024-12-09 05:20:49.352337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.948 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.352524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.352543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.352806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.352845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.353166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.353203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.353435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.353470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.353627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.353661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.353961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.353996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.354142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.354159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 
00:26:12.949 [2024-12-09 05:20:49.354369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.354387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.354622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.354641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.354816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.354834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.355088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.355107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.355313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.355332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.355544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.355579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.355796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.355831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.356117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.356136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.356312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.356330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.356512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.356531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 
00:26:12.949 [2024-12-09 05:20:49.356665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.356698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.357014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.357051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.357276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.357314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.357570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.357604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.357912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.357946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.358217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.358253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.358525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.358560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.358772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.358806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.359073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.359094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.359267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.359304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 
00:26:12.949 [2024-12-09 05:20:49.359533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.359575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.359875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.359909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.360074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.360113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.360314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.360334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.360519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.360553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.360682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.360717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.360990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.361038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.361251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.361269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.361535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.361570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.949 [2024-12-09 05:20:49.361779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.361816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 
00:26:12.949 [2024-12-09 05:20:49.362018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.949 [2024-12-09 05:20:49.362037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.949 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.362221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.362239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.362424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.362444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.362626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.362663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.362910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.362949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.363151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.363170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.363344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.363363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.363626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.363661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.363808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.363844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.364051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.364070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 
00:26:12.950 [2024-12-09 05:20:49.364181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.364201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.364470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.364489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.364668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.364688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.364859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.364878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.365067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.365087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.365217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.365236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.365424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.365442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.365762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.365850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.366095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.366137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.366450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.366485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 
00:26:12.950 [2024-12-09 05:20:49.366786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.366820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.367039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.367076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.367302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.367331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.367573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.367612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.367829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.367865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.368119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.368135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.368303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.368339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.368500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.368535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.368846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.368881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.369042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.369079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 
00:26:12.950 [2024-12-09 05:20:49.369363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.369410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.369614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.369649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.369968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.370013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.370242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.370258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.370463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.370499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.370713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.370749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.370915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.370952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.371160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.371195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.950 [2024-12-09 05:20:49.371440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.950 [2024-12-09 05:20:49.371477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.950 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.371751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.371787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 
00:26:12.951 [2024-12-09 05:20:49.372091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.372128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.372293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.372330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.372607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.372642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.372870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.372904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.373185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.373223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.373372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.373408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.373654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.373688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.374017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.374053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.374346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.374384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.374606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.374633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 
00:26:12.951 [2024-12-09 05:20:49.374804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.374819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.375053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.375069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.375183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.375218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.375508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.375544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.375757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.375792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.375922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.375960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.376151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.376166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.376410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.376424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.376693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.376708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.376958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.376972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 
00:26:12.951 [2024-12-09 05:20:49.377132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.377148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.377273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.377305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.377516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.377552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.377762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.377797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.378067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.378103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.378398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.378432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.378719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.378754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.378957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.378994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.379232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.379248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.379429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.379464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 
00:26:12.951 [2024-12-09 05:20:49.379777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.379819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.380099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.380137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.380430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.380466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.380666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.380703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.380949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.380983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.381212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.381248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.381466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.951 [2024-12-09 05:20:49.381502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.951 qpair failed and we were unable to recover it. 00:26:12.951 [2024-12-09 05:20:49.381705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.381740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.382041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.382078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.382372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.382407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 
00:26:12.952 [2024-12-09 05:20:49.382693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.382728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.382951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.382986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.383297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.383332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.383627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.383662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.383885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.383921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.384160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.384199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.384490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.384515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.384697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.384710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.384991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.385037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.385339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.385377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 
00:26:12.952 [2024-12-09 05:20:49.385660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.385696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.385995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.386042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.386207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.386242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.386452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.386486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.386641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.386677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.386878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.386912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.387231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.387276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.387520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.387570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.387836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.387879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.388192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.388231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 
00:26:12.952 [2024-12-09 05:20:49.388530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.388565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.388887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.388921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.389132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.389152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.389262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.389278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.389466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.389483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.389658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.389681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.389919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.389954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.390100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.390137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.390462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.390497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.390714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.390749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 
00:26:12.952 [2024-12-09 05:20:49.391010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.952 [2024-12-09 05:20:49.391056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.952 qpair failed and we were unable to recover it. 00:26:12.952 [2024-12-09 05:20:49.391272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.391307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.391583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.391601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.391838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.391857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.392102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.392123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.392413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.392447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.392749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.392786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.393062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.393099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.393373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.393408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.393646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.393680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 
00:26:12.953 [2024-12-09 05:20:49.394022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.394041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.394261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.394281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.394525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.394562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.394785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.394822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.395110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.395129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.395299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.395318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.395435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.395453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.395653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.395673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.395926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.395965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.396329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.396413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 
00:26:12.953 [2024-12-09 05:20:49.396736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.396777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.396996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.397056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.397179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.397199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.397312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.397332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.397545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.397582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.397866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.397902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.398111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.398150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.398467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.398518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.398805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.398840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.399137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.399173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 
00:26:12.953 [2024-12-09 05:20:49.399454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.399490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.399805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.399839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.400049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.400068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.400317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.400352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.400551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.400587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.400737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.400772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.400996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.401041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.401264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.401299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.401541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.401559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.953 qpair failed and we were unable to recover it. 00:26:12.953 [2024-12-09 05:20:49.401799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.953 [2024-12-09 05:20:49.401818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 
00:26:12.954 [2024-12-09 05:20:49.401985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.402011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.402196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.402213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.402474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.402509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.402776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.402812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.403042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.403078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.403380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.403415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.403617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.403651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.403859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.403877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.404139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.404157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.404334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.404351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 
00:26:12.954 [2024-12-09 05:20:49.404659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.404693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.404966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.405017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.405247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.405264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.405495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.405512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.405629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.405650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.405838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.405856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.406092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.406111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.406343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.406378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.406579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.406613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.406814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.406848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 
00:26:12.954 [2024-12-09 05:20:49.407145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.407181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.407390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.407425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.407696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.407730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.407953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.407987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.408233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.408267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.408480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.408498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.408681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.408699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.408851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.408869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.409069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.409105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.409379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.409413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 
00:26:12.954 [2024-12-09 05:20:49.409711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.409745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.410031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.410066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.410276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.410310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.410461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.410495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.410771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.410805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.411103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.411139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.411352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.411386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.411677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.954 [2024-12-09 05:20:49.411712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.954 qpair failed and we were unable to recover it. 00:26:12.954 [2024-12-09 05:20:49.412037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.412073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.412338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.412372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 
00:26:12.955 [2024-12-09 05:20:49.412655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.412696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.412977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.413005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.413285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.413320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.413561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.413596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.413795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.413830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.414068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.414103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.414332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.414366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.414648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.414683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.414901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.414935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.415249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.415267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 
00:26:12.955 [2024-12-09 05:20:49.415452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.415486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.415763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.415799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.416076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.416112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.416329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.416363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.416631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.416667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.416970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.417012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.417174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.417210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.417485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.417521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.417825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.417860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.418147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.418183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 
00:26:12.955 [2024-12-09 05:20:49.418475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.418510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.418706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.418742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.418950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.418996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.419265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.419301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.419542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.419559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.419818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.419853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.420198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.420243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.420502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.420519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.420744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.420779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 00:26:12.955 [2024-12-09 05:20:49.421109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.955 [2024-12-09 05:20:49.421145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.955 qpair failed and we were unable to recover it. 
00:26:12.959 [2024-12-09 05:20:49.456462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.959 [2024-12-09 05:20:49.456506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420
00:26:12.959 qpair failed and we were unable to recover it.
00:26:12.959 [2024-12-09 05:20:49.456691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.959 [2024-12-09 05:20:49.456709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420
00:26:12.959 qpair failed and we were unable to recover it.
00:26:12.959 [2024-12-09 05:20:49.457015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.959 [2024-12-09 05:20:49.457050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420
00:26:12.959 qpair failed and we were unable to recover it.
00:26:12.959 [2024-12-09 05:20:49.457375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.959 [2024-12-09 05:20:49.457410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420
00:26:12.959 qpair failed and we were unable to recover it.
00:26:12.959 [2024-12-09 05:20:49.457600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.959 [2024-12-09 05:20:49.457640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420
00:26:12.959 qpair failed and we were unable to recover it.
00:26:12.959 [2024-12-09 05:20:49.457934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.959 [2024-12-09 05:20:49.457968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420
00:26:12.959 qpair failed and we were unable to recover it.
00:26:12.959 [2024-12-09 05:20:49.458271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.959 [2024-12-09 05:20:49.458351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420
00:26:12.959 qpair failed and we were unable to recover it.
00:26:12.959 [2024-12-09 05:20:49.458692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.959 [2024-12-09 05:20:49.458771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420
00:26:12.959 qpair failed and we were unable to recover it.
00:26:12.959 [2024-12-09 05:20:49.459070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.959 [2024-12-09 05:20:49.459162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420
00:26:12.959 qpair failed and we were unable to recover it.
00:26:12.959 [2024-12-09 05:20:49.459464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.959 [2024-12-09 05:20:49.459501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420
00:26:12.959 qpair failed and we were unable to recover it.
00:26:12.961 [2024-12-09 05:20:49.475975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.476029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 00:26:12.961 [2024-12-09 05:20:49.476306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.476339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 00:26:12.961 [2024-12-09 05:20:49.476534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.476552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 00:26:12.961 [2024-12-09 05:20:49.476802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.476820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 00:26:12.961 [2024-12-09 05:20:49.476992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.477019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 00:26:12.961 [2024-12-09 05:20:49.477134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.477150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 00:26:12.961 [2024-12-09 05:20:49.477455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.477489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 00:26:12.961 [2024-12-09 05:20:49.477708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.477742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 00:26:12.961 [2024-12-09 05:20:49.477942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.477977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 00:26:12.961 [2024-12-09 05:20:49.478217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.478251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 
00:26:12.961 [2024-12-09 05:20:49.478554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.478598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 00:26:12.961 [2024-12-09 05:20:49.478857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.478900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 00:26:12.961 [2024-12-09 05:20:49.479189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.479225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 00:26:12.961 [2024-12-09 05:20:49.479518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.479552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 00:26:12.961 [2024-12-09 05:20:49.479839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.479875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 00:26:12.961 [2024-12-09 05:20:49.480162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.480198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 00:26:12.961 [2024-12-09 05:20:49.480446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.480482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 00:26:12.961 [2024-12-09 05:20:49.480743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.480762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 00:26:12.961 [2024-12-09 05:20:49.480929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.480947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 00:26:12.961 [2024-12-09 05:20:49.481205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.481223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 
00:26:12.961 [2024-12-09 05:20:49.481504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.481521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 00:26:12.961 [2024-12-09 05:20:49.481693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.481711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 00:26:12.961 [2024-12-09 05:20:49.481948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.481982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 00:26:12.961 [2024-12-09 05:20:49.482209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.482245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 00:26:12.961 [2024-12-09 05:20:49.482544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.482578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 00:26:12.961 [2024-12-09 05:20:49.482858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.482892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 00:26:12.961 [2024-12-09 05:20:49.483192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.483228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 00:26:12.961 [2024-12-09 05:20:49.483447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.483480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 00:26:12.961 [2024-12-09 05:20:49.483777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.483795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 00:26:12.961 [2024-12-09 05:20:49.484062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.961 [2024-12-09 05:20:49.484080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.961 qpair failed and we were unable to recover it. 
00:26:12.962 [2024-12-09 05:20:49.484263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.484280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.484532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.484550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.484784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.484802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.484973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.484991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.485107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.485142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.485371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.485405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.485628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.485662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.485928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.485963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.486186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.486222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.486435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.486468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 
00:26:12.962 [2024-12-09 05:20:49.486682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.486699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.486875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.486910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.487064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.487100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.487276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.487311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.487615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.487649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.487850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.487885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.488103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.488139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.488410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.488444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.488660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.488678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.488796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.488813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 
00:26:12.962 [2024-12-09 05:20:49.489056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.489092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.489321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.489355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.489632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.489668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.489962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.490007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.490214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.490257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.490490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.490508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.490785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.490825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.491042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.491079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.491299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.491334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.491546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.491564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 
00:26:12.962 [2024-12-09 05:20:49.491734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.491770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.491980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.492040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.492247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.492281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.492575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.492610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.492907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.492941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.493270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.493306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.493598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.493633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.493902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.962 [2024-12-09 05:20:49.493936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.962 qpair failed and we were unable to recover it. 00:26:12.962 [2024-12-09 05:20:49.494168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.494204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.494498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.494534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 
00:26:12.963 [2024-12-09 05:20:49.494738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.494772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.495050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.495087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.495365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.495399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.495636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.495655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.495848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.495866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.496150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.496168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.496368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.496402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.496641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.496676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.496896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.496931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.497086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.497122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 
00:26:12.963 [2024-12-09 05:20:49.497363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.497399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.497663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.497681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.497933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.497951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.498129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.498150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.498422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.498456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.498656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.498690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.498967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.499015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.499253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.499288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.499569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.499604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.499813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.499831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 
00:26:12.963 [2024-12-09 05:20:49.500091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.500111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.500313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.500331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.500438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.500456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.500687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.500721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.500964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.501008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.501283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.501317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.501584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.501619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.501784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.501818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.502038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.502074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.502372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.502406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 
00:26:12.963 [2024-12-09 05:20:49.502704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.502722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.502943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.502977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.503228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.503263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.503436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.503470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.503762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.503780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.503950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.503968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.963 [2024-12-09 05:20:49.504164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.963 [2024-12-09 05:20:49.504200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.963 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.504341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.504375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.504531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.504567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.504854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.504872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 
00:26:12.964 [2024-12-09 05:20:49.505178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.505214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.505442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.505477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.505684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.505720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.505991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.506038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.506333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.506367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.506575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.506610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.506805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.506839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.507058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.507095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.507331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.507350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.507590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.507626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 
00:26:12.964 [2024-12-09 05:20:49.507839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.507873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.508083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.508120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.508336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.508370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.508674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.508709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.509021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.509059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.509259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.509294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.509567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.509602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.509815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.509834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.510011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.510031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.510267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.510286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 
00:26:12.964 [2024-12-09 05:20:49.510472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.510492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.510731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.510749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.510938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.510972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.511217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.511252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.511569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.511604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.511846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.511881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.512174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.512219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.512335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.512351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.512521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.512540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.512710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.512727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 
00:26:12.964 [2024-12-09 05:20:49.512990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.513034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.513323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.513358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.513567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.513585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.513819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.513854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.514055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.964 [2024-12-09 05:20:49.514092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.964 qpair failed and we were unable to recover it. 00:26:12.964 [2024-12-09 05:20:49.514374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.514408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.514627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.514662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.514931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.514966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.515245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.515281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.515501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.515536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 
00:26:12.965 [2024-12-09 05:20:49.515808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.515843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.516064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.516106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.516312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.516347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.516573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.516591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.516788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.516806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.517098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.517135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.517298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.517333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.517529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.517564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.517779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.517798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.517975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.517993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 
00:26:12.965 [2024-12-09 05:20:49.518253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.518271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.518434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.518453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.518642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.518659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.518907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.518925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.519204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.519239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.519556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.519592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.519868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.519903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.520040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.520076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.520277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.520312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.520629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.520664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 
00:26:12.965 [2024-12-09 05:20:49.520956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.520990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.521317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.521354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.521651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.521686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.521891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.521926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.522142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.522178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.522448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.522483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.522796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.522814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.523005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.523023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.523258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.523279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.523496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.523531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 
00:26:12.965 [2024-12-09 05:20:49.523804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.523841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.524141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.524177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.524451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.524486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.524788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.965 [2024-12-09 05:20:49.524824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.965 qpair failed and we were unable to recover it. 00:26:12.965 [2024-12-09 05:20:49.525098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.525117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.525381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.525416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.525733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.525769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.525987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.526032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.526209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.526245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.526534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.526552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 
00:26:12.966 [2024-12-09 05:20:49.526714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.526733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.526916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.526952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.527241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.527278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.527583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.527601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.527804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.527821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.527996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.528043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.528273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.528309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.528603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.528638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.528794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.528828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.529050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.529087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 
00:26:12.966 [2024-12-09 05:20:49.529386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.529421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.529753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.529786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.529993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.530039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.530323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.530357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.530583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.530617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.530907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.530929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.531115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.531133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.531295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.531313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.531598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.531632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.531946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.531981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 
00:26:12.966 [2024-12-09 05:20:49.532249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.532284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.532500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.532535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.532825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.532860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.533149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.533186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.533424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.533460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.533754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.533788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.534087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.534123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.534347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.534381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.966 [2024-12-09 05:20:49.534674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.966 [2024-12-09 05:20:49.534709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.966 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.534869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.534888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 
00:26:12.967 [2024-12-09 05:20:49.535143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.535162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.535441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.535459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.535585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.535603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.535790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.535808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.536073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.536109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.536271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.536306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.536597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.536632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.536872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.536908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.537110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.537146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.537438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.537472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 
00:26:12.967 [2024-12-09 05:20:49.537775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.537809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.538089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.538125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.538346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.538380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.538591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.538625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.538852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.538871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.539132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.539167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.539507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.539541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.539780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.539816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.540110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.540146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.540427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.540462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 
00:26:12.967 [2024-12-09 05:20:49.540624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.540658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.540948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.540991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.541239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.541273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.541542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.541576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.541887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.541905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.542151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.542169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.542363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.542381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.542649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.542666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.542842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.542877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.543086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.543123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 
00:26:12.967 [2024-12-09 05:20:49.543395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.543430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.543654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.543688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.543956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.543974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.544230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.544270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.544544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.544579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.544884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.544918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.545198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.967 [2024-12-09 05:20:49.545234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.967 qpair failed and we were unable to recover it. 00:26:12.967 [2024-12-09 05:20:49.545526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.545560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.545844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.545879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.546105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.546141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 
00:26:12.968 [2024-12-09 05:20:49.546377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.546412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.546706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.546740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.546981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.547024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.547335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.547370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.547669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.547704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.548023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.548059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.548364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.548400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.548694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.548729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.549019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.549054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.549272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.549306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 
00:26:12.968 [2024-12-09 05:20:49.549548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.549592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.549847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.549865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.549973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.549989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.550263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.550305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.550524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.550558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.550839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.550874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.551145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.551182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.551337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.551371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.551666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.551701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.551980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.552029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 
00:26:12.968 [2024-12-09 05:20:49.552353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.552388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.552704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.552738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.553025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.553061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.553308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.553342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.553628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.553663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.553956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.553989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.554223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.554258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.554554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.554600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.554889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.554924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.555127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.555163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 
00:26:12.968 [2024-12-09 05:20:49.555478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.555512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.555740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.555775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.555989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.556051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.556185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.556220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.556516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.556560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.968 [2024-12-09 05:20:49.556745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.968 [2024-12-09 05:20:49.556764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.968 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.557005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.557023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.557185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.557204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.557330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.557365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.557585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.557621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 
00:26:12.969 [2024-12-09 05:20:49.557893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.557935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.558235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.558270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.558481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.558517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.558812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.558846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.559135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.559172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.559461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.559496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.559791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.559831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.560004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.560022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.560206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.560225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.560483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.560518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 
00:26:12.969 [2024-12-09 05:20:49.560719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.560753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.560925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.560960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.561260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.561296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.561586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.561604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.561736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.561754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.561872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.561890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.562080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.562099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.562331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.562349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.562553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.562571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.562739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.562774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 
00:26:12.969 [2024-12-09 05:20:49.563045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.563081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.563299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.563334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.563586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.563620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.563875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.563893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.564100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.564136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.564374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.564408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.564692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.564710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.564816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.564835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.565010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.565028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:12.969 [2024-12-09 05:20:49.565230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.565249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 
00:26:12.969 [2024-12-09 05:20:49.565354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.969 [2024-12-09 05:20:49.565371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:12.969 qpair failed and we were unable to recover it. 00:26:13.252 [2024-12-09 05:20:49.565623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.252 [2024-12-09 05:20:49.565641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.252 qpair failed and we were unable to recover it. 00:26:13.252 [2024-12-09 05:20:49.565915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.252 [2024-12-09 05:20:49.565934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.252 qpair failed and we were unable to recover it. 00:26:13.252 [2024-12-09 05:20:49.566046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.252 [2024-12-09 05:20:49.566063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.252 qpair failed and we were unable to recover it. 00:26:13.252 [2024-12-09 05:20:49.566319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.252 [2024-12-09 05:20:49.566337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.252 qpair failed and we were unable to recover it. 00:26:13.252 [2024-12-09 05:20:49.566513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.252 [2024-12-09 05:20:49.566530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.252 qpair failed and we were unable to recover it. 00:26:13.252 [2024-12-09 05:20:49.566721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.252 [2024-12-09 05:20:49.566739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.252 qpair failed and we were unable to recover it. 00:26:13.252 [2024-12-09 05:20:49.566850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.252 [2024-12-09 05:20:49.566867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.252 qpair failed and we were unable to recover it. 00:26:13.252 [2024-12-09 05:20:49.567076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.252 [2024-12-09 05:20:49.567094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.252 qpair failed and we were unable to recover it. 00:26:13.252 [2024-12-09 05:20:49.567355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.252 [2024-12-09 05:20:49.567374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.252 qpair failed and we were unable to recover it. 
00:26:13.252 [2024-12-09 05:20:49.567550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.252 [2024-12-09 05:20:49.567568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.252 qpair failed and we were unable to recover it. 00:26:13.252 [2024-12-09 05:20:49.567848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.252 [2024-12-09 05:20:49.567867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.252 qpair failed and we were unable to recover it. 00:26:13.252 [2024-12-09 05:20:49.568110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.252 [2024-12-09 05:20:49.568128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.252 qpair failed and we were unable to recover it. 00:26:13.252 [2024-12-09 05:20:49.568362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.252 [2024-12-09 05:20:49.568380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.252 qpair failed and we were unable to recover it. 00:26:13.252 [2024-12-09 05:20:49.568505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.252 [2024-12-09 05:20:49.568523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.252 qpair failed and we were unable to recover it. 00:26:13.252 [2024-12-09 05:20:49.568710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.252 [2024-12-09 05:20:49.568727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.252 qpair failed and we were unable to recover it. 00:26:13.252 [2024-12-09 05:20:49.568905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.252 [2024-12-09 05:20:49.568923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.252 qpair failed and we were unable to recover it. 00:26:13.252 [2024-12-09 05:20:49.569166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.253 [2024-12-09 05:20:49.569184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.253 qpair failed and we were unable to recover it. 00:26:13.253 [2024-12-09 05:20:49.569444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.253 [2024-12-09 05:20:49.569462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.253 qpair failed and we were unable to recover it. 00:26:13.253 [2024-12-09 05:20:49.569763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.253 [2024-12-09 05:20:49.569797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.253 qpair failed and we were unable to recover it. 
00:26:13.253 [2024-12-09 05:20:49.570020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.253 [2024-12-09 05:20:49.570055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.253 qpair failed and we were unable to recover it. 00:26:13.253 [2024-12-09 05:20:49.570272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.253 [2024-12-09 05:20:49.570307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.253 qpair failed and we were unable to recover it. 00:26:13.253 [2024-12-09 05:20:49.570455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.253 [2024-12-09 05:20:49.570489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.253 qpair failed and we were unable to recover it. 00:26:13.253 [2024-12-09 05:20:49.570690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.253 [2024-12-09 05:20:49.570724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.253 qpair failed and we were unable to recover it. 00:26:13.253 [2024-12-09 05:20:49.570976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.253 [2024-12-09 05:20:49.570993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.253 qpair failed and we were unable to recover it. 00:26:13.253 [2024-12-09 05:20:49.571172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.253 [2024-12-09 05:20:49.571191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.253 qpair failed and we were unable to recover it. 00:26:13.253 [2024-12-09 05:20:49.571469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.253 [2024-12-09 05:20:49.571488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.253 qpair failed and we were unable to recover it. 00:26:13.253 [2024-12-09 05:20:49.571670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.253 [2024-12-09 05:20:49.571689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.253 qpair failed and we were unable to recover it. 00:26:13.253 [2024-12-09 05:20:49.571925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.253 [2024-12-09 05:20:49.571959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.253 qpair failed and we were unable to recover it. 00:26:13.253 [2024-12-09 05:20:49.572184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.253 [2024-12-09 05:20:49.572220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.253 qpair failed and we were unable to recover it. 
00:26:13.253 [2024-12-09 05:20:49.572496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.253 [2024-12-09 05:20:49.572540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.253 qpair failed and we were unable to recover it. 00:26:13.253 [2024-12-09 05:20:49.572721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.253 [2024-12-09 05:20:49.572738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.253 qpair failed and we were unable to recover it. 00:26:13.253 [2024-12-09 05:20:49.573023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.253 [2024-12-09 05:20:49.573059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.253 qpair failed and we were unable to recover it. 00:26:13.253 [2024-12-09 05:20:49.573379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.253 [2024-12-09 05:20:49.573414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.253 qpair failed and we were unable to recover it. 00:26:13.253 [2024-12-09 05:20:49.573613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.253 [2024-12-09 05:20:49.573647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.253 qpair failed and we were unable to recover it. 00:26:13.253 [2024-12-09 05:20:49.573851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.253 [2024-12-09 05:20:49.573887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.253 qpair failed and we were unable to recover it. 00:26:13.253 [2024-12-09 05:20:49.574105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.253 [2024-12-09 05:20:49.574141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.253 qpair failed and we were unable to recover it. 00:26:13.253 [2024-12-09 05:20:49.574436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.253 [2024-12-09 05:20:49.574470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.253 qpair failed and we were unable to recover it. 00:26:13.253 [2024-12-09 05:20:49.574755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.253 [2024-12-09 05:20:49.574795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.253 qpair failed and we were unable to recover it. 00:26:13.253 [2024-12-09 05:20:49.574930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.253 [2024-12-09 05:20:49.574965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.253 qpair failed and we were unable to recover it. 
00:26:13.253 [2024-12-09 05:20:49.575276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.253 [2024-12-09 05:20:49.575311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.253 qpair failed and we were unable to recover it. 00:26:13.253 [2024-12-09 05:20:49.575519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.253 [2024-12-09 05:20:49.575554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.253 qpair failed and we were unable to recover it. 00:26:13.253 [2024-12-09 05:20:49.575754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.253 [2024-12-09 05:20:49.575800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.254 qpair failed and we were unable to recover it. 00:26:13.254 [2024-12-09 05:20:49.575995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.254 [2024-12-09 05:20:49.576018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.254 qpair failed and we were unable to recover it. 00:26:13.254 [2024-12-09 05:20:49.576303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.254 [2024-12-09 05:20:49.576321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.254 qpair failed and we were unable to recover it. 00:26:13.254 [2024-12-09 05:20:49.576574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.254 [2024-12-09 05:20:49.576592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.254 qpair failed and we were unable to recover it. 00:26:13.254 [2024-12-09 05:20:49.576873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.254 [2024-12-09 05:20:49.576891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.254 qpair failed and we were unable to recover it. 00:26:13.254 [2024-12-09 05:20:49.577091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.254 [2024-12-09 05:20:49.577109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.254 qpair failed and we were unable to recover it. 00:26:13.254 [2024-12-09 05:20:49.577344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.254 [2024-12-09 05:20:49.577362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.254 qpair failed and we were unable to recover it. 00:26:13.254 [2024-12-09 05:20:49.577609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.254 [2024-12-09 05:20:49.577627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.254 qpair failed and we were unable to recover it. 
00:26:13.254 [2024-12-09 05:20:49.577803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.254 [2024-12-09 05:20:49.577821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.254 qpair failed and we were unable to recover it. 00:26:13.254 [2024-12-09 05:20:49.578073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.254 [2024-12-09 05:20:49.578092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.254 qpair failed and we were unable to recover it. 00:26:13.254 [2024-12-09 05:20:49.578345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.254 [2024-12-09 05:20:49.578384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.254 qpair failed and we were unable to recover it. 00:26:13.254 [2024-12-09 05:20:49.578661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.254 [2024-12-09 05:20:49.578695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.254 qpair failed and we were unable to recover it. 00:26:13.254 [2024-12-09 05:20:49.578926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.254 [2024-12-09 05:20:49.578962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.254 qpair failed and we were unable to recover it. 00:26:13.254 [2024-12-09 05:20:49.579266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.254 [2024-12-09 05:20:49.579301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.254 qpair failed and we were unable to recover it. 00:26:13.254 [2024-12-09 05:20:49.579570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.254 [2024-12-09 05:20:49.579612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.254 qpair failed and we were unable to recover it. 00:26:13.254 [2024-12-09 05:20:49.579797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.254 [2024-12-09 05:20:49.579815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.254 qpair failed and we were unable to recover it. 00:26:13.254 [2024-12-09 05:20:49.580104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.254 [2024-12-09 05:20:49.580139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.254 qpair failed and we were unable to recover it. 00:26:13.254 [2024-12-09 05:20:49.580362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.254 [2024-12-09 05:20:49.580396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.254 qpair failed and we were unable to recover it. 
00:26:13.254 [2024-12-09 05:20:49.580614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.254 [2024-12-09 05:20:49.580648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.254 qpair failed and we were unable to recover it. 00:26:13.254 [2024-12-09 05:20:49.580936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.254 [2024-12-09 05:20:49.580954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.254 qpair failed and we were unable to recover it. 00:26:13.254 [2024-12-09 05:20:49.581188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.254 [2024-12-09 05:20:49.581206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.254 qpair failed and we were unable to recover it. 00:26:13.254 [2024-12-09 05:20:49.581370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.254 [2024-12-09 05:20:49.581388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.254 qpair failed and we were unable to recover it. 00:26:13.254 [2024-12-09 05:20:49.581562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.254 [2024-12-09 05:20:49.581580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.254 qpair failed and we were unable to recover it. 00:26:13.254 [2024-12-09 05:20:49.581699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.254 [2024-12-09 05:20:49.581720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.254 qpair failed and we were unable to recover it. 00:26:13.254 [2024-12-09 05:20:49.581980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.254 [2024-12-09 05:20:49.582038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.254 qpair failed and we were unable to recover it. 00:26:13.254 [2024-12-09 05:20:49.582357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.254 [2024-12-09 05:20:49.582391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.254 qpair failed and we were unable to recover it. 00:26:13.254 [2024-12-09 05:20:49.582684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.582718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.582960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.582994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 
00:26:13.255 [2024-12-09 05:20:49.583251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.583286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.583591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.583625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.583851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.583885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.584111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.584159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.584416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.584460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.584661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.584695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.584905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.584940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.585167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.585204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.585442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.585477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.585728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.585763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 
00:26:13.255 [2024-12-09 05:20:49.585891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.585910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.586175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.586212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.586412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.586447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.586643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.586677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.586954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.586971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.587215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.587234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.587428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.587446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.587611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.587628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.587826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.587844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.588126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.588161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 
00:26:13.255 [2024-12-09 05:20:49.588435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.588470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.588628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.588662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.588951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.588972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.589159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.589178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.589413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.589430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.589675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.589693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.589979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.590026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.590270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.590305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.590619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.590653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.255 [2024-12-09 05:20:49.590930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.590964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 
00:26:13.255 [2024-12-09 05:20:49.591272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.255 [2024-12-09 05:20:49.591307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.255 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.591586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.591620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.591767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.591801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.592093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.592128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.592326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.592361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.592662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.592698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.592935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.592953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.593209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.593227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.593346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.593362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.593620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.593655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 
00:26:13.256 [2024-12-09 05:20:49.593946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.593981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.594310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.594345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.594615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.594650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.594853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.594887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.595093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.595129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.595344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.595378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.595673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.595707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.595920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.595955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.596172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.596208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.596501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.596535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 
00:26:13.256 [2024-12-09 05:20:49.596733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.596752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.597017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.597036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.597279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.597313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.597590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.597624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.597925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.597960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.598268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.598305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.598588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.598624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.598845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.598879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.599106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.599143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.599414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.599448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 
00:26:13.256 [2024-12-09 05:20:49.599720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.599739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.599936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.599954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.600170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.600189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.600359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.600377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.600518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.600554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.600683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.600716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.600981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.256 [2024-12-09 05:20:49.601041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.256 qpair failed and we were unable to recover it. 00:26:13.256 [2024-12-09 05:20:49.601319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.257 [2024-12-09 05:20:49.601355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.257 qpair failed and we were unable to recover it. 00:26:13.257 [2024-12-09 05:20:49.601661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.257 [2024-12-09 05:20:49.601695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.257 qpair failed and we were unable to recover it. 00:26:13.257 [2024-12-09 05:20:49.601967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.257 [2024-12-09 05:20:49.601987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.257 qpair failed and we were unable to recover it. 
00:26:13.257 [2024-12-09 05:20:49.602170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.257 [2024-12-09 05:20:49.602188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420
00:26:13.257 qpair failed and we were unable to recover it.
00:26:13.257 [2024-12-09 05:20:49.602430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.257 [2024-12-09 05:20:49.602465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420
00:26:13.257 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for each subsequent connection attempt from 05:20:49.602 through 05:20:49.655 ...]
00:26:13.262 [2024-12-09 05:20:49.655557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.262 [2024-12-09 05:20:49.655576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420
00:26:13.262 qpair failed and we were unable to recover it.
00:26:13.262 [2024-12-09 05:20:49.655869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.262 [2024-12-09 05:20:49.655964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.262 qpair failed and we were unable to recover it. 00:26:13.262 [2024-12-09 05:20:49.656334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.262 [2024-12-09 05:20:49.656416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.262 qpair failed and we were unable to recover it. 00:26:13.262 [2024-12-09 05:20:49.656664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.262 [2024-12-09 05:20:49.656705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.262 qpair failed and we were unable to recover it. 00:26:13.262 [2024-12-09 05:20:49.656913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.262 [2024-12-09 05:20:49.656951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.262 qpair failed and we were unable to recover it. 00:26:13.262 [2024-12-09 05:20:49.657184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.262 [2024-12-09 05:20:49.657204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.262 qpair failed and we were unable to recover it. 00:26:13.262 [2024-12-09 05:20:49.657442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.262 [2024-12-09 05:20:49.657461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.262 qpair failed and we were unable to recover it. 00:26:13.262 [2024-12-09 05:20:49.657694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.262 [2024-12-09 05:20:49.657730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.262 qpair failed and we were unable to recover it. 00:26:13.262 [2024-12-09 05:20:49.657946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.262 [2024-12-09 05:20:49.657981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.658271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.658307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.658519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.658555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 
00:26:13.263 [2024-12-09 05:20:49.658771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.658805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.659104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.659140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.659350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.659387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.659682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.659743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.659906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.659925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.660104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.660141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.660339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.660374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.660611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.660648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.660908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.660928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.661035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.661054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 
00:26:13.263 [2024-12-09 05:20:49.661236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.661273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.661426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.661463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.661752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.661789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.661980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.662005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.662266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.662286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.662475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.662493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.662732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.662750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.662949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.662969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.663175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.663193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.663322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.663357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 
00:26:13.263 [2024-12-09 05:20:49.663631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.663667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.663961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.663979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.664170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.664189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.664351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.664371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.664559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.664577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.664755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.664775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.665017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.665036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.665292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.665311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.665428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.665445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.665707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.665741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 
00:26:13.263 [2024-12-09 05:20:49.665965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.666014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.666312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.666347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.666671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.666706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.667001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.667020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.667158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.263 [2024-12-09 05:20:49.667177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.263 qpair failed and we were unable to recover it. 00:26:13.263 [2024-12-09 05:20:49.667342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.667383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.667529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.667566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.667781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.667814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.667948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.667984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.668231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.668249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 
00:26:13.264 [2024-12-09 05:20:49.668425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.668459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.668617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.668653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.668854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.668889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.669176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.669212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.669433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.669469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.669639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.669674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.669980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.670026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.670309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.670349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.670555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.670590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.670746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.670763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 
00:26:13.264 [2024-12-09 05:20:49.670977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.671023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.671198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.671234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.671443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.671477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.671681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.671718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.672017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.672054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.672324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.672344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.672638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.672656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.672848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.672885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.673101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.673137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.673352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.673388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 
00:26:13.264 [2024-12-09 05:20:49.673667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.673704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.673911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.673931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.674192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.674210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.674391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.674409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.674534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.674569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.674734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.674770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.674988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.675033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.675241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.675275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.675413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.675446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.675698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.675733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 
00:26:13.264 [2024-12-09 05:20:49.676018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.676061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.676330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.676347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.264 [2024-12-09 05:20:49.676512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.264 [2024-12-09 05:20:49.676530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.264 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.676716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.676751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.676953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.676987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.677303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.677339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.677575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.677609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.677778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.677812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.677966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.677983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.678190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.678225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 
00:26:13.265 [2024-12-09 05:20:49.678370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.678406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.678606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.678640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.678858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.678893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.679139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.679157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.679331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.679350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.679472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.679490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.679763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.679798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.680070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.680106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.684014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.684055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.684264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.684281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 
00:26:13.265 [2024-12-09 05:20:49.684546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.684565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.684664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.684680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.684932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.684950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.685263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.685284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.685418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.685435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.685705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.685725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.685974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.685995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.686258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.686282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.686482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.686500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.686691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.686710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 
00:26:13.265 [2024-12-09 05:20:49.686902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.686934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.687050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.687065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.687186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.687199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.687381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.687398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.687597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.687613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.687808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.687825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.688027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.688047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.688232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.688251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.688474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.688493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 00:26:13.265 [2024-12-09 05:20:49.688748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.688769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.265 qpair failed and we were unable to recover it. 
00:26:13.265 [2024-12-09 05:20:49.688977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.265 [2024-12-09 05:20:49.689013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.689283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.689306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.689448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.689467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.689734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.689752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.689970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.689988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.690139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.690156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.690402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.690417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.690595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.690614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.690851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.690868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.691116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.691135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 
00:26:13.266 [2024-12-09 05:20:49.691315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.691332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.691456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.691474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.691645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.691662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.691841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.691857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.692028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.692045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.692137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.692151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.692375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.692391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.692640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.692655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.692900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.692918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.693160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.693177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 
00:26:13.266 [2024-12-09 05:20:49.693342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.693360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.693556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.693573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.693745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.693762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.694013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.694031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.694210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.694226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.694397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.694414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.694582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.694598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.694843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.694860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.695061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.695078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.695194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.695210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 
00:26:13.266 [2024-12-09 05:20:49.695381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.695396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.695589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.695605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.695700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.695714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.695885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.695903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.696139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.696158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.696271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.696285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.696459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.696476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.696725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.266 [2024-12-09 05:20:49.696741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.266 qpair failed and we were unable to recover it. 00:26:13.266 [2024-12-09 05:20:49.697009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.697025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.697132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.697146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 
00:26:13.267 [2024-12-09 05:20:49.697334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.697353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.697437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.697451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.697613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.697629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.697821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.697837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.697949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.697962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.698155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.698175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.698356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.698374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.698543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.698559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.698673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.698688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.698821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.698840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 
00:26:13.267 [2024-12-09 05:20:49.698936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.698953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.699087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.699104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.699217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.699234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.699352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.699371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.699468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.699483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.699592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.699607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.699792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.699809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.699934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.699954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.700060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.700077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.700197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.700215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 
00:26:13.267 [2024-12-09 05:20:49.700396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.700415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.700673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.700691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.700831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.700848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.700967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.700986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.701099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.701115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.701218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.701236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.701490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.701510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.701631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.701649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.701763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.701780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.701969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.701987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 
00:26:13.267 [2024-12-09 05:20:49.702180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.702200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.702387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.267 [2024-12-09 05:20:49.702404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.267 qpair failed and we were unable to recover it. 00:26:13.267 [2024-12-09 05:20:49.702597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.702616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.702715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.702731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.702828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.702848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.703103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.703122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.703320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.703336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.703529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.703547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.703711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.703728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.703848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.703866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 
00:26:13.268 [2024-12-09 05:20:49.703982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.704010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.704128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.704145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.704253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.704272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.704390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.704409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.704570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.704590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.704718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.704735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.704844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.704864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.704974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.704992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.705116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.705135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.705319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.705337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 
00:26:13.268 [2024-12-09 05:20:49.705522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.705540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.705659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.705678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.705796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.705815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.705913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.705930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.706050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.706070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.706180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.706197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.706451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.706470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.706635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.706654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.706749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.706773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.706945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.706965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 
00:26:13.268 [2024-12-09 05:20:49.707085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.707112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.707264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.707283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.707384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.707402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.707499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.707516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.707718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.707737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.707845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.707864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.708075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.268 [2024-12-09 05:20:49.708094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.268 qpair failed and we were unable to recover it. 00:26:13.268 [2024-12-09 05:20:49.708250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.269 [2024-12-09 05:20:49.708271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.269 qpair failed and we were unable to recover it. 00:26:13.269 [2024-12-09 05:20:49.708450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.269 [2024-12-09 05:20:49.708468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.269 qpair failed and we were unable to recover it. 00:26:13.269 [2024-12-09 05:20:49.708654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.269 [2024-12-09 05:20:49.708672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.269 qpair failed and we were unable to recover it. 
00:26:13.269 [2024-12-09 05:20:49.708875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.269 [2024-12-09 05:20:49.708893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.269 qpair failed and we were unable to recover it. 00:26:13.269 [2024-12-09 05:20:49.709077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.269 [2024-12-09 05:20:49.709097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.269 qpair failed and we were unable to recover it. 00:26:13.269 [2024-12-09 05:20:49.709265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.269 [2024-12-09 05:20:49.709282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.269 qpair failed and we were unable to recover it. 00:26:13.269 [2024-12-09 05:20:49.709511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.269 [2024-12-09 05:20:49.709530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.269 qpair failed and we were unable to recover it. 00:26:13.269 [2024-12-09 05:20:49.709848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.269 [2024-12-09 05:20:49.709865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.269 qpair failed and we were unable to recover it. 00:26:13.269 [2024-12-09 05:20:49.709980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.269 [2024-12-09 05:20:49.710016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.269 qpair failed and we were unable to recover it. 00:26:13.269 [2024-12-09 05:20:49.710207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.269 [2024-12-09 05:20:49.710227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.269 qpair failed and we were unable to recover it. 00:26:13.269 [2024-12-09 05:20:49.710431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.269 [2024-12-09 05:20:49.710448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.269 qpair failed and we were unable to recover it. 00:26:13.269 [2024-12-09 05:20:49.710695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.269 [2024-12-09 05:20:49.710714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.269 qpair failed and we were unable to recover it. 00:26:13.269 [2024-12-09 05:20:49.710895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.269 [2024-12-09 05:20:49.710912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.269 qpair failed and we were unable to recover it. 
00:26:13.269 [2024-12-09 05:20:49.711111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.269 [2024-12-09 05:20:49.711132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.269 qpair failed and we were unable to recover it. 00:26:13.269 [2024-12-09 05:20:49.711247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.269 [2024-12-09 05:20:49.711266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.269 qpair failed and we were unable to recover it. 00:26:13.269 [2024-12-09 05:20:49.711442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.269 [2024-12-09 05:20:49.711458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.269 qpair failed and we were unable to recover it. 00:26:13.269 [2024-12-09 05:20:49.711570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.269 [2024-12-09 05:20:49.711589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.269 qpair failed and we were unable to recover it. 00:26:13.269 [2024-12-09 05:20:49.711707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.269 [2024-12-09 05:20:49.711724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.269 qpair failed and we were unable to recover it. 00:26:13.269 [2024-12-09 05:20:49.712022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.269 [2024-12-09 05:20:49.712040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.269 qpair failed and we were unable to recover it. 00:26:13.269 [2024-12-09 05:20:49.712273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.269 [2024-12-09 05:20:49.712292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.269 qpair failed and we were unable to recover it. 00:26:13.269 [2024-12-09 05:20:49.712473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.269 [2024-12-09 05:20:49.712490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.269 qpair failed and we were unable to recover it. 00:26:13.269 [2024-12-09 05:20:49.712702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.269 [2024-12-09 05:20:49.712719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.269 qpair failed and we were unable to recover it. 00:26:13.269 [2024-12-09 05:20:49.712880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.269 [2024-12-09 05:20:49.712897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.269 qpair failed and we were unable to recover it. 
00:26:13.269 [2024-12-09 05:20:49.713085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.269 [2024-12-09 05:20:49.713102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.269 qpair failed and we were unable to recover it. 00:26:13.269 [2024-12-09 05:20:49.713336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.269 [2024-12-09 05:20:49.713353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.269 qpair failed and we were unable to recover it. 00:26:13.269 [2024-12-09 05:20:49.713486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.270 [2024-12-09 05:20:49.713505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.270 qpair failed and we were unable to recover it. 00:26:13.270 [2024-12-09 05:20:49.713702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.270 [2024-12-09 05:20:49.713719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.270 qpair failed and we were unable to recover it. 00:26:13.270 [2024-12-09 05:20:49.713950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.270 [2024-12-09 05:20:49.713967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.270 qpair failed and we were unable to recover it. 00:26:13.270 [2024-12-09 05:20:49.714160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.270 [2024-12-09 05:20:49.714179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.270 qpair failed and we were unable to recover it. 00:26:13.270 [2024-12-09 05:20:49.714366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.270 [2024-12-09 05:20:49.714384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.270 qpair failed and we were unable to recover it. 00:26:13.270 [2024-12-09 05:20:49.714609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.270 [2024-12-09 05:20:49.714652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.270 qpair failed and we were unable to recover it. 00:26:13.270 [2024-12-09 05:20:49.714852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.270 [2024-12-09 05:20:49.714874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.270 qpair failed and we were unable to recover it. 00:26:13.270 [2024-12-09 05:20:49.715038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.270 [2024-12-09 05:20:49.715058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.270 qpair failed and we were unable to recover it. 
00:26:13.270 [2024-12-09 05:20:49.715188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.270 [2024-12-09 05:20:49.715207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.270 qpair failed and we were unable to recover it. 00:26:13.270 [2024-12-09 05:20:49.715318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.270 [2024-12-09 05:20:49.715336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.270 qpair failed and we were unable to recover it. 00:26:13.270 [2024-12-09 05:20:49.715546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.270 [2024-12-09 05:20:49.715567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.270 qpair failed and we were unable to recover it. 00:26:13.270 [2024-12-09 05:20:49.715672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.270 [2024-12-09 05:20:49.715690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.270 qpair failed and we were unable to recover it. 00:26:13.270 [2024-12-09 05:20:49.715817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.270 [2024-12-09 05:20:49.715834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.270 qpair failed and we were unable to recover it. 00:26:13.270 [2024-12-09 05:20:49.716014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.270 [2024-12-09 05:20:49.716033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.270 qpair failed and we were unable to recover it. 00:26:13.270 [2024-12-09 05:20:49.716119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.270 [2024-12-09 05:20:49.716136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.270 qpair failed and we were unable to recover it. 00:26:13.270 [2024-12-09 05:20:49.716262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.270 [2024-12-09 05:20:49.716280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.270 qpair failed and we were unable to recover it. 00:26:13.270 [2024-12-09 05:20:49.716556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.270 [2024-12-09 05:20:49.716573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.270 qpair failed and we were unable to recover it. 00:26:13.270 [2024-12-09 05:20:49.716743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.270 [2024-12-09 05:20:49.716761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.270 qpair failed and we were unable to recover it. 
00:26:13.270 [2024-12-09 05:20:49.716941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.270 [2024-12-09 05:20:49.716960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.270 qpair failed and we were unable to recover it. 00:26:13.270 [2024-12-09 05:20:49.717167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.270 [2024-12-09 05:20:49.717189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.270 qpair failed and we were unable to recover it. 00:26:13.270 [2024-12-09 05:20:49.717323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.270 [2024-12-09 05:20:49.717340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.270 qpair failed and we were unable to recover it. 00:26:13.270 [2024-12-09 05:20:49.717542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.270 [2024-12-09 05:20:49.717561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.270 qpair failed and we were unable to recover it. 00:26:13.270 [2024-12-09 05:20:49.717690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.270 [2024-12-09 05:20:49.717707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.270 qpair failed and we were unable to recover it. 00:26:13.270 [2024-12-09 05:20:49.717886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.270 [2024-12-09 05:20:49.717903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.270 qpair failed and we were unable to recover it. 00:26:13.270 [2024-12-09 05:20:49.718075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.270 [2024-12-09 05:20:49.718092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.270 qpair failed and we were unable to recover it. 00:26:13.270 [2024-12-09 05:20:49.718275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.271 [2024-12-09 05:20:49.718292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.271 qpair failed and we were unable to recover it. 00:26:13.271 [2024-12-09 05:20:49.718524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.271 [2024-12-09 05:20:49.718541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.271 qpair failed and we were unable to recover it. 00:26:13.271 [2024-12-09 05:20:49.718756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.271 [2024-12-09 05:20:49.718773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.271 qpair failed and we were unable to recover it. 
00:26:13.271 [2024-12-09 05:20:49.718931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.271 [2024-12-09 05:20:49.718952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.271 qpair failed and we were unable to recover it. 00:26:13.271 [2024-12-09 05:20:49.719209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.271 [2024-12-09 05:20:49.719227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.271 qpair failed and we were unable to recover it. 00:26:13.271 [2024-12-09 05:20:49.719431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.271 [2024-12-09 05:20:49.719450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.271 qpair failed and we were unable to recover it. 00:26:13.271 [2024-12-09 05:20:49.719613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.271 [2024-12-09 05:20:49.719629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.271 qpair failed and we were unable to recover it. 00:26:13.271 [2024-12-09 05:20:49.719828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.271 [2024-12-09 05:20:49.719850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.271 qpair failed and we were unable to recover it. 00:26:13.271 [2024-12-09 05:20:49.720138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.271 [2024-12-09 05:20:49.720155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.271 qpair failed and we were unable to recover it. 00:26:13.271 [2024-12-09 05:20:49.720334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.271 [2024-12-09 05:20:49.720352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.271 qpair failed and we were unable to recover it. 00:26:13.271 [2024-12-09 05:20:49.720530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.271 [2024-12-09 05:20:49.720548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.271 qpair failed and we were unable to recover it. 00:26:13.271 [2024-12-09 05:20:49.720680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.271 [2024-12-09 05:20:49.720699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.271 qpair failed and we were unable to recover it. 00:26:13.271 [2024-12-09 05:20:49.720887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.271 [2024-12-09 05:20:49.720905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.271 qpair failed and we were unable to recover it. 
00:26:13.271 [2024-12-09 05:20:49.721180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.271 [2024-12-09 05:20:49.721199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.271 qpair failed and we were unable to recover it. 00:26:13.271 [2024-12-09 05:20:49.721359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.271 [2024-12-09 05:20:49.721376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.271 qpair failed and we were unable to recover it. 00:26:13.271 [2024-12-09 05:20:49.721618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.271 [2024-12-09 05:20:49.721636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.271 qpair failed and we were unable to recover it. 00:26:13.271 [2024-12-09 05:20:49.721806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.271 [2024-12-09 05:20:49.721824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.271 qpair failed and we were unable to recover it. 00:26:13.271 [2024-12-09 05:20:49.722007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.271 [2024-12-09 05:20:49.722026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.271 qpair failed and we were unable to recover it. 00:26:13.271 [2024-12-09 05:20:49.722224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.271 [2024-12-09 05:20:49.722242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.271 qpair failed and we were unable to recover it. 00:26:13.271 [2024-12-09 05:20:49.722468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.271 [2024-12-09 05:20:49.722486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.271 qpair failed and we were unable to recover it. 00:26:13.271 [2024-12-09 05:20:49.722655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.271 [2024-12-09 05:20:49.722672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.271 qpair failed and we were unable to recover it. 00:26:13.271 [2024-12-09 05:20:49.722903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.271 [2024-12-09 05:20:49.722921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.271 qpair failed and we were unable to recover it. 00:26:13.271 [2024-12-09 05:20:49.723132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.271 [2024-12-09 05:20:49.723150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.271 qpair failed and we were unable to recover it. 
00:26:13.271 [2024-12-09 05:20:49.723414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.271 [2024-12-09 05:20:49.723432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.271 qpair failed and we were unable to recover it. 00:26:13.271 [2024-12-09 05:20:49.723548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.271 [2024-12-09 05:20:49.723565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.271 qpair failed and we were unable to recover it. 00:26:13.271 [2024-12-09 05:20:49.723761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.271 [2024-12-09 05:20:49.723778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.272 qpair failed and we were unable to recover it. 00:26:13.272 [2024-12-09 05:20:49.724010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.272 [2024-12-09 05:20:49.724027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.272 qpair failed and we were unable to recover it. 00:26:13.272 [2024-12-09 05:20:49.724187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.272 [2024-12-09 05:20:49.724204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.272 qpair failed and we were unable to recover it. 00:26:13.272 [2024-12-09 05:20:49.724455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.272 [2024-12-09 05:20:49.724471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.272 qpair failed and we were unable to recover it. 00:26:13.272 [2024-12-09 05:20:49.724598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.272 [2024-12-09 05:20:49.724615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.272 qpair failed and we were unable to recover it. 00:26:13.272 [2024-12-09 05:20:49.724741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.272 [2024-12-09 05:20:49.724759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.272 qpair failed and we were unable to recover it. 00:26:13.272 [2024-12-09 05:20:49.724934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.272 [2024-12-09 05:20:49.724951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.272 qpair failed and we were unable to recover it. 00:26:13.272 [2024-12-09 05:20:49.725191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.272 [2024-12-09 05:20:49.725208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.272 qpair failed and we were unable to recover it. 
00:26:13.272 [2024-12-09 05:20:49.725436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.272 [2024-12-09 05:20:49.725454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.272 qpair failed and we were unable to recover it. 00:26:13.272 [2024-12-09 05:20:49.725634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.272 [2024-12-09 05:20:49.725651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.272 qpair failed and we were unable to recover it. 00:26:13.272 [2024-12-09 05:20:49.725827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.272 [2024-12-09 05:20:49.725846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.272 qpair failed and we were unable to recover it. 00:26:13.272 [2024-12-09 05:20:49.726032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.272 [2024-12-09 05:20:49.726051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.272 qpair failed and we were unable to recover it. 00:26:13.272 [2024-12-09 05:20:49.726230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.272 [2024-12-09 05:20:49.726249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.272 qpair failed and we were unable to recover it. 00:26:13.272 [2024-12-09 05:20:49.726413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.272 [2024-12-09 05:20:49.726430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.272 qpair failed and we were unable to recover it. 00:26:13.272 [2024-12-09 05:20:49.726658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.272 [2024-12-09 05:20:49.726676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.272 qpair failed and we were unable to recover it. 00:26:13.272 [2024-12-09 05:20:49.726843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.272 [2024-12-09 05:20:49.726861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.272 qpair failed and we were unable to recover it. 00:26:13.272 [2024-12-09 05:20:49.727102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.272 [2024-12-09 05:20:49.727120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.272 qpair failed and we were unable to recover it. 00:26:13.272 [2024-12-09 05:20:49.727363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.272 [2024-12-09 05:20:49.727379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.272 qpair failed and we were unable to recover it. 
00:26:13.272 [2024-12-09 05:20:49.727550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.272 [2024-12-09 05:20:49.727570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.272 qpair failed and we were unable to recover it. 00:26:13.272 [2024-12-09 05:20:49.727693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.272 [2024-12-09 05:20:49.727712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.272 qpair failed and we were unable to recover it. 00:26:13.272 [2024-12-09 05:20:49.727838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.272 [2024-12-09 05:20:49.727856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.272 qpair failed and we were unable to recover it. 00:26:13.272 [2024-12-09 05:20:49.728049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.272 [2024-12-09 05:20:49.728070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.272 qpair failed and we were unable to recover it. 00:26:13.272 [2024-12-09 05:20:49.728300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.272 [2024-12-09 05:20:49.728317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.272 qpair failed and we were unable to recover it. 00:26:13.272 [2024-12-09 05:20:49.728545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.272 [2024-12-09 05:20:49.728563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.272 qpair failed and we were unable to recover it. 00:26:13.272 [2024-12-09 05:20:49.728742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.272 [2024-12-09 05:20:49.728759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.272 qpair failed and we were unable to recover it. 00:26:13.272 [2024-12-09 05:20:49.728945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.273 [2024-12-09 05:20:49.728962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.273 qpair failed and we were unable to recover it. 00:26:13.273 [2024-12-09 05:20:49.729196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.273 [2024-12-09 05:20:49.729214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.273 qpair failed and we were unable to recover it. 00:26:13.273 [2024-12-09 05:20:49.729414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.273 [2024-12-09 05:20:49.729434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.273 qpair failed and we were unable to recover it. 
00:26:13.273 [2024-12-09 05:20:49.729564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.273 [2024-12-09 05:20:49.729584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.273 qpair failed and we were unable to recover it. 00:26:13.273 [2024-12-09 05:20:49.729862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.273 [2024-12-09 05:20:49.729879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.273 qpair failed and we were unable to recover it. 00:26:13.273 [2024-12-09 05:20:49.730055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.273 [2024-12-09 05:20:49.730073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.273 qpair failed and we were unable to recover it. 00:26:13.273 [2024-12-09 05:20:49.730259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.273 [2024-12-09 05:20:49.730276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.273 qpair failed and we were unable to recover it. 00:26:13.273 [2024-12-09 05:20:49.730386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.273 [2024-12-09 05:20:49.730405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.273 qpair failed and we were unable to recover it. 00:26:13.273 [2024-12-09 05:20:49.730594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.273 [2024-12-09 05:20:49.730612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.273 qpair failed and we were unable to recover it. 00:26:13.273 [2024-12-09 05:20:49.730815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.273 [2024-12-09 05:20:49.730832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.273 qpair failed and we were unable to recover it. 00:26:13.273 [2024-12-09 05:20:49.730944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.273 [2024-12-09 05:20:49.730959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.273 qpair failed and we were unable to recover it. 00:26:13.273 [2024-12-09 05:20:49.731119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.273 [2024-12-09 05:20:49.731137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.273 qpair failed and we were unable to recover it. 00:26:13.273 [2024-12-09 05:20:49.731397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.273 [2024-12-09 05:20:49.731414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.273 qpair failed and we were unable to recover it. 
00:26:13.273 [2024-12-09 05:20:49.731610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.273 [2024-12-09 05:20:49.731628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.273 qpair failed and we were unable to recover it. 00:26:13.273 [2024-12-09 05:20:49.731876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.273 [2024-12-09 05:20:49.731892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.273 qpair failed and we were unable to recover it. 00:26:13.273 [2024-12-09 05:20:49.732125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.273 [2024-12-09 05:20:49.732142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.273 qpair failed and we were unable to recover it. 00:26:13.273 [2024-12-09 05:20:49.732249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.273 [2024-12-09 05:20:49.732266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.273 qpair failed and we were unable to recover it. 00:26:13.273 [2024-12-09 05:20:49.732521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.273 [2024-12-09 05:20:49.732540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.273 qpair failed and we were unable to recover it. 00:26:13.273 [2024-12-09 05:20:49.732708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.273 [2024-12-09 05:20:49.732724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.273 qpair failed and we were unable to recover it. 00:26:13.273 [2024-12-09 05:20:49.732889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.273 [2024-12-09 05:20:49.732906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.273 qpair failed and we were unable to recover it. 00:26:13.273 [2024-12-09 05:20:49.733025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.273 [2024-12-09 05:20:49.733046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.273 qpair failed and we were unable to recover it. 00:26:13.273 [2024-12-09 05:20:49.733216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.273 [2024-12-09 05:20:49.733232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.274 qpair failed and we were unable to recover it. 00:26:13.274 [2024-12-09 05:20:49.733410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.274 [2024-12-09 05:20:49.733428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.274 qpair failed and we were unable to recover it. 
00:26:13.274 [2024-12-09 05:20:49.733608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.274 [2024-12-09 05:20:49.733626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.274 qpair failed and we were unable to recover it. 00:26:13.274 [2024-12-09 05:20:49.733870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.274 [2024-12-09 05:20:49.733888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.274 qpair failed and we were unable to recover it. 00:26:13.274 [2024-12-09 05:20:49.734137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.274 [2024-12-09 05:20:49.734155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.274 qpair failed and we were unable to recover it. 00:26:13.274 [2024-12-09 05:20:49.734264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.274 [2024-12-09 05:20:49.734281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.274 qpair failed and we were unable to recover it. 00:26:13.274 [2024-12-09 05:20:49.734445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.274 [2024-12-09 05:20:49.734462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.274 qpair failed and we were unable to recover it. 00:26:13.274 [2024-12-09 05:20:49.734571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.274 [2024-12-09 05:20:49.734586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.274 qpair failed and we were unable to recover it. 00:26:13.274 [2024-12-09 05:20:49.734774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.274 [2024-12-09 05:20:49.734793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.274 qpair failed and we were unable to recover it. 00:26:13.274 [2024-12-09 05:20:49.734958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.274 [2024-12-09 05:20:49.734976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.274 qpair failed and we were unable to recover it. 00:26:13.274 [2024-12-09 05:20:49.735106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.274 [2024-12-09 05:20:49.735123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.274 qpair failed and we were unable to recover it. 00:26:13.274 [2024-12-09 05:20:49.735250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.274 [2024-12-09 05:20:49.735268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.274 qpair failed and we were unable to recover it. 
00:26:13.274 [2024-12-09 05:20:49.735432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.274 [2024-12-09 05:20:49.735453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.274 qpair failed and we were unable to recover it. 00:26:13.274 [2024-12-09 05:20:49.735701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.274 [2024-12-09 05:20:49.735719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.274 qpair failed and we were unable to recover it. 00:26:13.274 [2024-12-09 05:20:49.735944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.274 [2024-12-09 05:20:49.735962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.274 qpair failed and we were unable to recover it. 00:26:13.274 [2024-12-09 05:20:49.736081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.274 [2024-12-09 05:20:49.736101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.274 qpair failed and we were unable to recover it. 00:26:13.274 [2024-12-09 05:20:49.736290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.274 [2024-12-09 05:20:49.736307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.274 qpair failed and we were unable to recover it. 00:26:13.274 [2024-12-09 05:20:49.736485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.274 [2024-12-09 05:20:49.736502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.274 qpair failed and we were unable to recover it. 00:26:13.274 [2024-12-09 05:20:49.736677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.274 [2024-12-09 05:20:49.736693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.274 qpair failed and we were unable to recover it. 00:26:13.274 [2024-12-09 05:20:49.736862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.274 [2024-12-09 05:20:49.736879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.274 qpair failed and we were unable to recover it. 00:26:13.274 [2024-12-09 05:20:49.737155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.274 [2024-12-09 05:20:49.737172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.274 qpair failed and we were unable to recover it. 00:26:13.274 [2024-12-09 05:20:49.737423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.274 [2024-12-09 05:20:49.737440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.274 qpair failed and we were unable to recover it. 
00:26:13.274 [2024-12-09 05:20:49.737544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.274 [2024-12-09 05:20:49.737562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.274 qpair failed and we were unable to recover it. 00:26:13.274 [2024-12-09 05:20:49.737732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.274 [2024-12-09 05:20:49.737749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.274 qpair failed and we were unable to recover it. 00:26:13.274 [2024-12-09 05:20:49.737866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.274 [2024-12-09 05:20:49.737882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.274 qpair failed and we were unable to recover it. 00:26:13.274 [2024-12-09 05:20:49.738084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.274 [2024-12-09 05:20:49.738103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.274 qpair failed and we were unable to recover it. 00:26:13.274 [2024-12-09 05:20:49.738280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.274 [2024-12-09 05:20:49.738297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.738556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.738573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.738772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.738789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.738945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.738963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.739132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.739149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.739286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.739303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 
00:26:13.275 [2024-12-09 05:20:49.739535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.739553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.739818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.739835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.740075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.740092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.740319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.740338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.740519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.740536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.740763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.740781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.741011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.741028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.741193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.741233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.741496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.741527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.741691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.741705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 
00:26:13.275 [2024-12-09 05:20:49.741920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.741934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.742176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.742191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.742279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.742290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.742438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.742450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.742667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.742680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.742921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.742934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.743157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.743171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.743340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.743352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.743520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.743533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.743794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.743808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 
00:26:13.275 [2024-12-09 05:20:49.743914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.743932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.744104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.744120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.744285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.744299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.744493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.744506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.744671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.744684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.744851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.744864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.745120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.745135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.275 [2024-12-09 05:20:49.745291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.275 [2024-12-09 05:20:49.745304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.275 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.745539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.745554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.745746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.745761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 
00:26:13.276 [2024-12-09 05:20:49.745935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.745949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.746141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.746155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.746346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.746359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.746530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.746544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.746763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.746776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.747009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.747024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.747203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.747217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.747477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.747490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.747587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.747600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.747732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.747745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 
00:26:13.276 [2024-12-09 05:20:49.747843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.747854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.747938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.747952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.748054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.748066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.748233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.748247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.748338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.748349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.748453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.748465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.748617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.748631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.748836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.748860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.749048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.749068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.749263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.749282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 
00:26:13.276 [2024-12-09 05:20:49.749480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.749497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.749716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.749734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.749982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.750007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.750271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.750287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.750438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.750451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.750627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.750640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.750764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.750777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.751034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.751048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.751267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.751279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.751492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.751506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 
00:26:13.276 [2024-12-09 05:20:49.751639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.751652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.751880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.751893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.751981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.751992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.752209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.752224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.752319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.752331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.276 qpair failed and we were unable to recover it. 00:26:13.276 [2024-12-09 05:20:49.752489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.276 [2024-12-09 05:20:49.752502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.277 qpair failed and we were unable to recover it. 00:26:13.277 [2024-12-09 05:20:49.752658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.277 [2024-12-09 05:20:49.752671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.277 qpair failed and we were unable to recover it. 00:26:13.277 [2024-12-09 05:20:49.752824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.277 [2024-12-09 05:20:49.752837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.277 qpair failed and we were unable to recover it. 00:26:13.277 [2024-12-09 05:20:49.752909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.277 [2024-12-09 05:20:49.752921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.277 qpair failed and we were unable to recover it. 00:26:13.277 [2024-12-09 05:20:49.753090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.277 [2024-12-09 05:20:49.753103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.277 qpair failed and we were unable to recover it. 
00:26:13.277 [2024-12-09 05:20:49.753253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.277 [2024-12-09 05:20:49.753266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.277 qpair failed and we were unable to recover it. 00:26:13.277 [2024-12-09 05:20:49.753331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.277 [2024-12-09 05:20:49.753343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.277 qpair failed and we were unable to recover it. 00:26:13.277 [2024-12-09 05:20:49.753489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.277 [2024-12-09 05:20:49.753503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.277 qpair failed and we were unable to recover it. 00:26:13.277 [2024-12-09 05:20:49.753677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.277 [2024-12-09 05:20:49.753690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.277 qpair failed and we were unable to recover it. 00:26:13.277 [2024-12-09 05:20:49.753908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.277 [2024-12-09 05:20:49.753920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.277 qpair failed and we were unable to recover it. 00:26:13.277 [2024-12-09 05:20:49.754157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.277 [2024-12-09 05:20:49.754171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.277 qpair failed and we were unable to recover it. 00:26:13.277 [2024-12-09 05:20:49.754338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.277 [2024-12-09 05:20:49.754353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.277 qpair failed and we were unable to recover it. 00:26:13.277 [2024-12-09 05:20:49.754610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.277 [2024-12-09 05:20:49.754623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.277 qpair failed and we were unable to recover it. 00:26:13.277 [2024-12-09 05:20:49.754711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.277 [2024-12-09 05:20:49.754722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.277 qpair failed and we were unable to recover it. 00:26:13.277 [2024-12-09 05:20:49.754884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.277 [2024-12-09 05:20:49.754897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.277 qpair failed and we were unable to recover it. 
00:26:13.277 [2024-12-09 05:20:49.755011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.277 [2024-12-09 05:20:49.755025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.277 qpair failed and we were unable to recover it. 00:26:13.277 [2024-12-09 05:20:49.755266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.277 [2024-12-09 05:20:49.755281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.277 qpair failed and we were unable to recover it. 00:26:13.277 [2024-12-09 05:20:49.755376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.277 [2024-12-09 05:20:49.755389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.277 qpair failed and we were unable to recover it. 00:26:13.277 [2024-12-09 05:20:49.755539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.277 [2024-12-09 05:20:49.755554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.277 qpair failed and we were unable to recover it. 00:26:13.277 [2024-12-09 05:20:49.755724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.277 [2024-12-09 05:20:49.755738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.277 qpair failed and we were unable to recover it. 00:26:13.277 [2024-12-09 05:20:49.755844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.277 [2024-12-09 05:20:49.755857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.277 qpair failed and we were unable to recover it. 00:26:13.277 [2024-12-09 05:20:49.756085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.277 [2024-12-09 05:20:49.756099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.277 qpair failed and we were unable to recover it. 00:26:13.277 [2024-12-09 05:20:49.756276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.277 [2024-12-09 05:20:49.756290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.277 qpair failed and we were unable to recover it. 00:26:13.277 [2024-12-09 05:20:49.756461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.277 [2024-12-09 05:20:49.756475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.277 qpair failed and we were unable to recover it. 00:26:13.277 [2024-12-09 05:20:49.756622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.277 [2024-12-09 05:20:49.756635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.277 qpair failed and we were unable to recover it. 
00:26:13.277 [2024-12-09 05:20:49.756847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.277 [2024-12-09 05:20:49.756860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.277 qpair failed and we were unable to recover it.
(the connect() failed / sock connection error / qpair failed sequence above repeats 183 more times for tqpair=0x7f96b4000b90 between 05:20:49.757096 and 05:20:49.793743, every attempt failing with errno = 111 against 10.0.0.2 port 4420)
00:26:13.282 [2024-12-09 05:20:49.793979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.282 [2024-12-09 05:20:49.794002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.282 qpair failed and we were unable to recover it.
(the same sequence repeats 25 more times for tqpair=0x7f96b0000b90 between 05:20:49.794221 and 05:20:49.799219, every attempt again failing with errno = 111)
00:26:13.283 [2024-12-09 05:20:49.799442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.283 [2024-12-09 05:20:49.799459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.283 qpair failed and we were unable to recover it. 00:26:13.283 [2024-12-09 05:20:49.799700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.283 [2024-12-09 05:20:49.799726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.283 qpair failed and we were unable to recover it. 00:26:13.283 [2024-12-09 05:20:49.799943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.283 [2024-12-09 05:20:49.799959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.283 qpair failed and we were unable to recover it. 00:26:13.283 [2024-12-09 05:20:49.800121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.283 [2024-12-09 05:20:49.800137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.283 qpair failed and we were unable to recover it. 00:26:13.283 [2024-12-09 05:20:49.800358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.283 [2024-12-09 05:20:49.800374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.283 qpair failed and we were unable to recover it. 00:26:13.283 [2024-12-09 05:20:49.800613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.283 [2024-12-09 05:20:49.800629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.283 qpair failed and we were unable to recover it. 00:26:13.283 [2024-12-09 05:20:49.800789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.283 [2024-12-09 05:20:49.800805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.283 qpair failed and we were unable to recover it. 00:26:13.283 [2024-12-09 05:20:49.801048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.283 [2024-12-09 05:20:49.801064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.283 qpair failed and we were unable to recover it. 00:26:13.283 [2024-12-09 05:20:49.801328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.283 [2024-12-09 05:20:49.801344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.283 qpair failed and we were unable to recover it. 00:26:13.283 [2024-12-09 05:20:49.801615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.283 [2024-12-09 05:20:49.801631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.283 qpair failed and we were unable to recover it. 
00:26:13.283 [2024-12-09 05:20:49.801877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.283 [2024-12-09 05:20:49.801893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.283 qpair failed and we were unable to recover it. 00:26:13.283 [2024-12-09 05:20:49.802064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.283 [2024-12-09 05:20:49.802081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.283 qpair failed and we were unable to recover it. 00:26:13.283 [2024-12-09 05:20:49.802240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.283 [2024-12-09 05:20:49.802256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.283 qpair failed and we were unable to recover it. 00:26:13.283 [2024-12-09 05:20:49.802428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.283 [2024-12-09 05:20:49.802444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.283 qpair failed and we were unable to recover it. 00:26:13.283 [2024-12-09 05:20:49.802727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.283 [2024-12-09 05:20:49.802748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.283 qpair failed and we were unable to recover it. 00:26:13.283 [2024-12-09 05:20:49.802916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.283 [2024-12-09 05:20:49.802934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.283 qpair failed and we were unable to recover it. 00:26:13.283 [2024-12-09 05:20:49.803172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.283 [2024-12-09 05:20:49.803190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.283 qpair failed and we were unable to recover it. 00:26:13.283 [2024-12-09 05:20:49.803419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.283 [2024-12-09 05:20:49.803435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.283 qpair failed and we were unable to recover it. 00:26:13.283 [2024-12-09 05:20:49.803651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.283 [2024-12-09 05:20:49.803667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.283 qpair failed and we were unable to recover it. 00:26:13.283 [2024-12-09 05:20:49.803840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.283 [2024-12-09 05:20:49.803856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.283 qpair failed and we were unable to recover it. 
00:26:13.283 [2024-12-09 05:20:49.804025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.283 [2024-12-09 05:20:49.804043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.283 qpair failed and we were unable to recover it. 00:26:13.283 [2024-12-09 05:20:49.804296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.804312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.804499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.804515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.804755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.804771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.805014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.805031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.805219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.805235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.805523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.805539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.805707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.805724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.805891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.805908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.806072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.806090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 
00:26:13.284 [2024-12-09 05:20:49.806334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.806351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.806507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.806524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.806682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.806698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.806953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.806970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.807215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.807232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.807402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.807418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.807659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.807676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.807917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.807934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.808178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.808195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.808361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.808378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 
00:26:13.284 [2024-12-09 05:20:49.808619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.808635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.808879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.808899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.809119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.809136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.809288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.809305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.809536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.809552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.809723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.809740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.809993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.810014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.810174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.810191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.810365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.810382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.810631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.810647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 
00:26:13.284 [2024-12-09 05:20:49.810732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.810748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.810870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.810886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.811061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.811078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.811297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.811313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.811491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.811507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.811621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.811637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.811808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.811826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.811989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.812010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.812259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.284 [2024-12-09 05:20:49.812276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.284 qpair failed and we were unable to recover it. 00:26:13.284 [2024-12-09 05:20:49.812454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.812470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 
00:26:13.285 [2024-12-09 05:20:49.812652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.812669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.812856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.812872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.813095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.813112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.813283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.813300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.813559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.813575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.813791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.813808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.813974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.813990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.814179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.814196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.814356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.814378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.814530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.814546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 
00:26:13.285 [2024-12-09 05:20:49.814715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.814732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.814978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.814994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.815164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.815180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.815331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.815347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.815516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.815532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.815684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.815701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.815891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.815907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.816159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.816176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.816364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.816380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.816619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.816635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 
00:26:13.285 [2024-12-09 05:20:49.816796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.816813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.816964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.816980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.817158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.817175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.817412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.817424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.817603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.817615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.817887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.817899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.818153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.818166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.818281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.818292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.818448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.818461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.818677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.818690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 
00:26:13.285 [2024-12-09 05:20:49.818918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.818931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.819095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.819109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.819199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.819210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.819396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.819408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.819562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.819575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.819732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.819747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.819968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.819981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.285 qpair failed and we were unable to recover it. 00:26:13.285 [2024-12-09 05:20:49.820205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.285 [2024-12-09 05:20:49.820218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.820389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.820401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.820610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.820622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 
00:26:13.286 [2024-12-09 05:20:49.820831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.820844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.821032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.821045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.821254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.821267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.821422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.821435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.821641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.821653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.821841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.821853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.821960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.821973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.822119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.822133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.822339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.822352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.822596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.822608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 
00:26:13.286 [2024-12-09 05:20:49.822833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.822846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.823061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.823074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.823306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.823318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.823486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.823498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.823684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.823698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.823929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.823941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.824195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.824208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.824445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.824457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.824571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.824581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.824794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.824806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 
00:26:13.286 [2024-12-09 05:20:49.824965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.824977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.825085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.825097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.825332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.825345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.825506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.825518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.825672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.825685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.825776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.825788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.826019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.826032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.826202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.826214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.826374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.826386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.826562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.826575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 
00:26:13.286 [2024-12-09 05:20:49.826838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.826850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.827014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.827026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.827247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.827260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.827483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.827495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.827643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.286 [2024-12-09 05:20:49.827656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.286 qpair failed and we were unable to recover it. 00:26:13.286 [2024-12-09 05:20:49.827801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.827816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.827967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.827979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.828190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.828203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.828364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.828377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.828605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.828618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 
00:26:13.287 [2024-12-09 05:20:49.828714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.828726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.828867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.828879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.829035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.829048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.829189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.829202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.829411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.829424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.829631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.829644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.829820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.829832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.830050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.830063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.830203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.830216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.830450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.830463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 
00:26:13.287 [2024-12-09 05:20:49.830615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.830628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.830828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.830840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.831022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.831035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.831269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.831282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.831519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.831531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.831744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.831757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.831939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.831951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.832096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.832109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.832268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.832280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.832509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.832521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 
00:26:13.287 [2024-12-09 05:20:49.832735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.832748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.832982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.832994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.833233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.833246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.833411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.833424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.833657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.833670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.833769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.833780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.834027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.834041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.834200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.834212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.834365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.287 [2024-12-09 05:20:49.834377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.287 qpair failed and we were unable to recover it. 00:26:13.287 [2024-12-09 05:20:49.834473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.834484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 
00:26:13.288 [2024-12-09 05:20:49.834578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.834589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.834796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.834809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.834885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.834896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.835109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.835122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.835208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.835219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.835477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.835492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.835730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.835742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.835982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.835994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.836207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.836220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.836476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.836489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 
00:26:13.288 [2024-12-09 05:20:49.836648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.836662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.836897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.836910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.837156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.837168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.837378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.837390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.837664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.837676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.837770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.837781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.838023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.838036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.838195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.838208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.838318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.838328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.838538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.838550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 
00:26:13.288 [2024-12-09 05:20:49.838787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.838799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.838950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.838963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.839205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.839218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.839455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.839469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.839650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.839662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.839892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.839905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.840079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.840093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.840312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.840324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.840574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.840587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.840746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.840759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 
00:26:13.288 [2024-12-09 05:20:49.840917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.840929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.841093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.841106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.841367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.841379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.841637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.841650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.841860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.841872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.842116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.842130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.288 [2024-12-09 05:20:49.842349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.288 [2024-12-09 05:20:49.842362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.288 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.842457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.842468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.842659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.842671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.842901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.842913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 
00:26:13.289 [2024-12-09 05:20:49.843053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.843066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.843249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.843262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.843491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.843504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.843659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.843671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.843815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.843827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.843974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.843992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.844161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.844173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.844388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.844400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.844543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.844555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.844642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.844653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 
00:26:13.289 [2024-12-09 05:20:49.844914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.844927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.845151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.845164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.845267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.845278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.845375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.845386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.845663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.845675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.845909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.845921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.846153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.846165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.846384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.846397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.846580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.846592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.846808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.846821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 
00:26:13.289 [2024-12-09 05:20:49.846980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.846993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.847154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.847167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.847328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.847341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.847501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.847514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.847668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.847680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.847828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.847840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.848016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.848029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.848207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.848219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.848394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.848406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.848520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.848532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 
00:26:13.289 [2024-12-09 05:20:49.848719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.848731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.848817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.848829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.848931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.848942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.849108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.289 [2024-12-09 05:20:49.849121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.289 qpair failed and we were unable to recover it. 00:26:13.289 [2024-12-09 05:20:49.849261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.849275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.849486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.849499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.849712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.849725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.849871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.849883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.850050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.850064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.850299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.850312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 
00:26:13.290 [2024-12-09 05:20:49.850470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.850482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.850579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.850590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.850751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.850765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.850882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.850895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.851129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.851143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.851371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.851386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.851546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.851559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.851729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.851742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.852004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.852019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.852215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.852228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 
00:26:13.290 [2024-12-09 05:20:49.852500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.852514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.852662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.852675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.852835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.852847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.853023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.853036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.853179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.853193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.853423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.853436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.853523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.853534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.853719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.853732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.853886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.853898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.854131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.854144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 
00:26:13.290 [2024-12-09 05:20:49.854347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.854359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.854569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.854582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.854832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.854844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.854987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.855012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.855223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.855237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.855394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.855406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.855501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.855514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.855751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.855763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.855940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.855954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.856097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.856111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 
00:26:13.290 [2024-12-09 05:20:49.856194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.856206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.856363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.290 [2024-12-09 05:20:49.856376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.290 qpair failed and we were unable to recover it. 00:26:13.290 [2024-12-09 05:20:49.856528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.856541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.856725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.856737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.857004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.857017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.857179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.857192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.857338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.857351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.857494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.857508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.857725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.857738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.857893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.857906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 
00:26:13.291 [2024-12-09 05:20:49.858106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.858120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.858339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.858352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.858510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.858524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.858734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.858748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.858932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.858945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.859138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.859154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.859312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.859325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.859477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.859489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.859718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.859732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.859902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.859916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 
00:26:13.291 [2024-12-09 05:20:49.860079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.860092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.860268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.860281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.860438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.860451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.860711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.860724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.860913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.860926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.861022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.861035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.861191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.861204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.861297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.861311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.861419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.861433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.861654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.861668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 
00:26:13.291 [2024-12-09 05:20:49.861767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.861780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.861959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.861972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.862185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.862198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.862409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.862423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.862657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.862670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.862752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.862765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.862919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.862932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.863167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.863182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.863386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.863400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 00:26:13.291 [2024-12-09 05:20:49.863559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.291 [2024-12-09 05:20:49.863573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.291 qpair failed and we were unable to recover it. 
00:26:13.291 [2024-12-09 05:20:49.863666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.863679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.863910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.863923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.864030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.864045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.864247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.864260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.864404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.864417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.864675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.864689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.864855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.864869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.864951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.864964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.865164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.865178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.865400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.865414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 
00:26:13.292 [2024-12-09 05:20:49.865632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.865645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.865822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.865836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.866065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.866079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.866288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.866302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.866515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.866527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.866771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.866785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.866955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.866967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.867199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.867213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.867324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.867337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.867498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.867511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 
00:26:13.292 [2024-12-09 05:20:49.867605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.867618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.867839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.867851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.868086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.868100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.868331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.868344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.868519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.868533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.868627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.868640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.868875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.868888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.869069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.869083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.869330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.869344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.869580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.869593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 
00:26:13.292 [2024-12-09 05:20:49.869816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.869829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.870053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.870068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.870221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.870234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.870380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.292 [2024-12-09 05:20:49.870393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.292 qpair failed and we were unable to recover it. 00:26:13.292 [2024-12-09 05:20:49.870571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.293 [2024-12-09 05:20:49.870584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.293 qpair failed and we were unable to recover it. 00:26:13.293 [2024-12-09 05:20:49.870731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.293 [2024-12-09 05:20:49.870743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.293 qpair failed and we were unable to recover it. 00:26:13.293 [2024-12-09 05:20:49.870840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.293 [2024-12-09 05:20:49.870854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.293 qpair failed and we were unable to recover it. 00:26:13.293 [2024-12-09 05:20:49.870958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.293 [2024-12-09 05:20:49.870970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.293 qpair failed and we were unable to recover it. 00:26:13.293 [2024-12-09 05:20:49.871205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.293 [2024-12-09 05:20:49.871219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.293 qpair failed and we were unable to recover it. 00:26:13.293 [2024-12-09 05:20:49.871309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.293 [2024-12-09 05:20:49.871323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.293 qpair failed and we were unable to recover it. 
00:26:13.293 [2024-12-09 05:20:49.871502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.293 [2024-12-09 05:20:49.871515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.293 qpair failed and we were unable to recover it. 00:26:13.293 [2024-12-09 05:20:49.871608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.293 [2024-12-09 05:20:49.871621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.293 qpair failed and we were unable to recover it. 00:26:13.293 [2024-12-09 05:20:49.871881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.293 [2024-12-09 05:20:49.871908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.293 qpair failed and we were unable to recover it. 00:26:13.293 [2024-12-09 05:20:49.872085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.293 [2024-12-09 05:20:49.872103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.293 qpair failed and we were unable to recover it. 00:26:13.293 [2024-12-09 05:20:49.872267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.293 [2024-12-09 05:20:49.872283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.293 qpair failed and we were unable to recover it. 00:26:13.599 [2024-12-09 05:20:49.872537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.599 [2024-12-09 05:20:49.872554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.599 qpair failed and we were unable to recover it. 00:26:13.599 [2024-12-09 05:20:49.872805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.599 [2024-12-09 05:20:49.872822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.599 qpair failed and we were unable to recover it. 00:26:13.599 [2024-12-09 05:20:49.872991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.599 [2024-12-09 05:20:49.873012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.599 qpair failed and we were unable to recover it. 00:26:13.599 [2024-12-09 05:20:49.873286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.599 [2024-12-09 05:20:49.873302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.599 qpair failed and we were unable to recover it. 00:26:13.599 [2024-12-09 05:20:49.873487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.599 [2024-12-09 05:20:49.873504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.599 qpair failed and we were unable to recover it. 
00:26:13.599 [2024-12-09 05:20:49.873644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.599 [2024-12-09 05:20:49.873660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.599 qpair failed and we were unable to recover it. 00:26:13.599 [2024-12-09 05:20:49.873851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.873868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.874053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.874070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.874181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.874197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.874361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.874377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.874567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.874587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.874825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.874843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.875020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.875037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.875164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.875181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.875354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.875371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 
00:26:13.600 [2024-12-09 05:20:49.875542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.875560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.875743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.875763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.875936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.875953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.876095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.876113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.876221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.876238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.876494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.876510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.876683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.876701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.876925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.876942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.877176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.877193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.877362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.877379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 
00:26:13.600 [2024-12-09 05:20:49.877564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.877582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.877752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.877769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.878039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.878059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.878333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.878347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.878581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.878595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.878835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.878848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.878932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.878945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.879182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.879195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.879476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.879489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.879643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.879656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 
00:26:13.600 [2024-12-09 05:20:49.879873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.879886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.880049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.880063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.880180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.880206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.880380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.880399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.880641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.880657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.880833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.880850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.880969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.880986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.881180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.881197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.600 [2024-12-09 05:20:49.881457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.600 [2024-12-09 05:20:49.881474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.600 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.881742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.881760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 
00:26:13.601 [2024-12-09 05:20:49.881913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.881931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.882092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.882110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.882338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.882361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.882528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.882544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.882658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.882674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.882841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.882865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.883130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.883149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.883328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.883347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.883582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.883600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.883849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.883869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 
00:26:13.601 [2024-12-09 05:20:49.884031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.884056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.884279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.884298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.884457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.884474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.884639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.884658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.884839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.884855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.885102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.885119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.885286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.885303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.885475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.885492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.885582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.885603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.885797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.885814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 
00:26:13.601 [2024-12-09 05:20:49.886057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.886076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.886259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.886277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.886456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.886473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.886727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.886745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.886991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.887015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.887119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.887136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.887371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.887388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.887616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.887632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.887811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.887829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.888035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.888054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 
00:26:13.601 [2024-12-09 05:20:49.888275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.888292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.888505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.888531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.888817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.888843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.888960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.888977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.889090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.889107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.889221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.889238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.601 qpair failed and we were unable to recover it. 00:26:13.601 [2024-12-09 05:20:49.889481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.601 [2024-12-09 05:20:49.889498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.889679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.889696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.889892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.889909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.890175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.890190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 
00:26:13.602 [2024-12-09 05:20:49.890348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.890363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.890572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.890587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.890739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.890751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.890837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.890850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.891062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.891076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.891229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.891242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.891334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.891348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.891506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.891519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.891658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.891672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.891816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.891830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 
00:26:13.602 [2024-12-09 05:20:49.892013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.892026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.892291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.892304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.892446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.892458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.892618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.892632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.892818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.892832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.892985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.893002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.893101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.893113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.893229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.893241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.893327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.893340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.893498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.893511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 
00:26:13.602 [2024-12-09 05:20:49.893623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.893636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.893892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.893905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.894080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.894094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.894210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.894222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.894368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.894380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.894567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.894581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.894844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.894858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.895118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.895133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.895338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.895352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.895535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.895548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 
00:26:13.602 [2024-12-09 05:20:49.895701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.895714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.895913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.895926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.896125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.896140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.896250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.602 [2024-12-09 05:20:49.896263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.602 qpair failed and we were unable to recover it. 00:26:13.602 [2024-12-09 05:20:49.896350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.896363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.896501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.896513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.896690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.896703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.896916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.896929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.897085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.897100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.897265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.897278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 
00:26:13.603 [2024-12-09 05:20:49.897423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.897436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.897626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.897640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.897849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.897863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.898072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.898085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.898322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.898336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.898426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.898441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.898534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.898547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.898695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.898708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.898868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.898881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.898981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.898995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 
00:26:13.603 [2024-12-09 05:20:49.899152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.899166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.899347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.899361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.899512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.899525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.899605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.899618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.899732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.899746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.899832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.899845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.899994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.900016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.900108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.900122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.900357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.900369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.900589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.900601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 
00:26:13.603 [2024-12-09 05:20:49.900759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.900772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.900936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.900949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.901036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.901050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.901219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.901231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.901326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.901342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.901494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.901507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.901646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.901660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.901765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.901777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.902013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.902028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.902110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.902124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 
00:26:13.603 [2024-12-09 05:20:49.902285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.902298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.902458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.603 [2024-12-09 05:20:49.902476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.603 qpair failed and we were unable to recover it. 00:26:13.603 [2024-12-09 05:20:49.902688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.902703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.902867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.902880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.903133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.903149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.903358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.903371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.903450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.903463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.903717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.903731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.903874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.903887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.904037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.904051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 
00:26:13.604 [2024-12-09 05:20:49.904153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.904166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.904261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.904274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.904432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.904445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.904588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.904602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.904834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.904847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.904996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.905014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.905187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.905199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.905357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.905370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.905577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.905590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.905762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.905775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 
00:26:13.604 [2024-12-09 05:20:49.905859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.905872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.906135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.906150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.906363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.906377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.906515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.906528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.906709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.906721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.906865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.906878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.907045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.907061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.907270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.907283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.907459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.907473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.907573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.907586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 
00:26:13.604 [2024-12-09 05:20:49.907745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.907758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.907968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.907982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.908073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.908086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.908186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.604 [2024-12-09 05:20:49.908199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.604 qpair failed and we were unable to recover it. 00:26:13.604 [2024-12-09 05:20:49.908364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.908376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.908465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.908479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.908624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.908638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.908729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.908743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.908936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.908950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.909116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.909132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 
00:26:13.605 [2024-12-09 05:20:49.909242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.909256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.909350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.909362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.909523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.909539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.909700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.909712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.909798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.909811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.909954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.909967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.910220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.910233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.910417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.910430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.910667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.910680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.910919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.910933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 
00:26:13.605 [2024-12-09 05:20:49.911032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.911046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.911272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.911286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.911453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.911466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.911673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.911686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.911874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.911887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.912079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.912093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.912332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.912346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.912456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.912469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.912635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.912648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.912818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.912831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 
00:26:13.605 [2024-12-09 05:20:49.912974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.912988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.913218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.913231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.913389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.913402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.913501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.913514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.913679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.913693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.913861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.913874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.914020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.914033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.914190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.914204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.914413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.914426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.914589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.914602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 
00:26:13.605 [2024-12-09 05:20:49.914755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.914770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.914910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.605 [2024-12-09 05:20:49.914925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.605 qpair failed and we were unable to recover it. 00:26:13.605 [2024-12-09 05:20:49.915163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.915176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.915338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.915351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.915442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.915455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.915549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.915562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.915642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.915654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.915885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.915897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.915987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.916003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.916219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.916232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 
00:26:13.606 [2024-12-09 05:20:49.916326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.916339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.916592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.916604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.916697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.916712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.916915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.916928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.917157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.917170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.917405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.917418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.917579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.917591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.917824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.917836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.918010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.918023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.918206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.918219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 
00:26:13.606 [2024-12-09 05:20:49.918374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.918386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.918529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.918541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.918759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.918772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.918963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.918976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.919145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.919158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.919420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.919433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.919650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.919662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.919822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.919834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.920067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.920080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.920250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.920262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 
00:26:13.606 [2024-12-09 05:20:49.920407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.920420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.920562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.920575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.920800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.920813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.921031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.921044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.921256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.921269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.921428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.921440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.921619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.921632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.921738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.921751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.921891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.921903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 00:26:13.606 [2024-12-09 05:20:49.922046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.606 [2024-12-09 05:20:49.922059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.606 qpair failed and we were unable to recover it. 
00:26:13.606 [2024-12-09 05:20:49.922136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.607 [2024-12-09 05:20:49.922149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.607 qpair failed and we were unable to recover it. 00:26:13.607 [2024-12-09 05:20:49.922359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.607 [2024-12-09 05:20:49.922371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.607 qpair failed and we were unable to recover it. 00:26:13.607 [2024-12-09 05:20:49.922603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.607 [2024-12-09 05:20:49.922615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.607 qpair failed and we were unable to recover it. 00:26:13.607 [2024-12-09 05:20:49.922841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.607 [2024-12-09 05:20:49.922854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.607 qpair failed and we were unable to recover it. 00:26:13.607 [2024-12-09 05:20:49.923031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.607 [2024-12-09 05:20:49.923043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.607 qpair failed and we were unable to recover it. 00:26:13.607 [2024-12-09 05:20:49.923201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.607 [2024-12-09 05:20:49.923214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.607 qpair failed and we were unable to recover it. 00:26:13.607 [2024-12-09 05:20:49.923445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.607 [2024-12-09 05:20:49.923457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.607 qpair failed and we were unable to recover it. 00:26:13.607 [2024-12-09 05:20:49.923676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.607 [2024-12-09 05:20:49.923688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.607 qpair failed and we were unable to recover it. 00:26:13.607 [2024-12-09 05:20:49.923922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.607 [2024-12-09 05:20:49.923935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.607 qpair failed and we were unable to recover it. 00:26:13.607 [2024-12-09 05:20:49.924087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.607 [2024-12-09 05:20:49.924100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.607 qpair failed and we were unable to recover it. 
00:26:13.607 [2024-12-09 05:20:49.924320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.607 [2024-12-09 05:20:49.924334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420
00:26:13.607 qpair failed and we were unable to recover it.
00:26:13.607 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from [2024-12-09 05:20:49.924497] through [2024-12-09 05:20:49.960920] ...]
00:26:13.612 [2024-12-09 05:20:49.961087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.612 [2024-12-09 05:20:49.961115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420
00:26:13.612 qpair failed and we were unable to recover it.
00:26:13.612 [... the same sequence then repeats for tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 through [2024-12-09 05:20:49.965172], each attempt ending with "qpair failed and we were unable to recover it." ...]
00:26:13.613 [2024-12-09 05:20:49.965344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.613 [2024-12-09 05:20:49.965360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.613 qpair failed and we were unable to recover it. 00:26:13.613 [2024-12-09 05:20:49.965533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.613 [2024-12-09 05:20:49.965549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.613 qpair failed and we were unable to recover it. 00:26:13.613 [2024-12-09 05:20:49.965801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.613 [2024-12-09 05:20:49.965816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.613 qpair failed and we were unable to recover it. 00:26:13.613 [2024-12-09 05:20:49.965982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.613 [2024-12-09 05:20:49.966004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.613 qpair failed and we were unable to recover it. 00:26:13.613 [2024-12-09 05:20:49.966141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.613 [2024-12-09 05:20:49.966157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.613 qpair failed and we were unable to recover it. 00:26:13.613 [2024-12-09 05:20:49.966342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.613 [2024-12-09 05:20:49.966358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.613 qpair failed and we were unable to recover it. 00:26:13.613 [2024-12-09 05:20:49.966551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.613 [2024-12-09 05:20:49.966568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.613 qpair failed and we were unable to recover it. 00:26:13.613 [2024-12-09 05:20:49.966754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.613 [2024-12-09 05:20:49.966770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.613 qpair failed and we were unable to recover it. 00:26:13.613 [2024-12-09 05:20:49.967017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.613 [2024-12-09 05:20:49.967034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.613 qpair failed and we were unable to recover it. 00:26:13.613 [2024-12-09 05:20:49.967253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.613 [2024-12-09 05:20:49.967269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.613 qpair failed and we were unable to recover it. 
00:26:13.613 [2024-12-09 05:20:49.967386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.613 [2024-12-09 05:20:49.967402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.613 qpair failed and we were unable to recover it. 00:26:13.613 [2024-12-09 05:20:49.967646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.613 [2024-12-09 05:20:49.967662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.613 qpair failed and we were unable to recover it. 00:26:13.613 [2024-12-09 05:20:49.967892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.613 [2024-12-09 05:20:49.967909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.613 qpair failed and we were unable to recover it. 00:26:13.613 [2024-12-09 05:20:49.968072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.613 [2024-12-09 05:20:49.968089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.613 qpair failed and we were unable to recover it. 00:26:13.613 [2024-12-09 05:20:49.968325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.613 [2024-12-09 05:20:49.968341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.613 qpair failed and we were unable to recover it. 00:26:13.613 [2024-12-09 05:20:49.968446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.613 [2024-12-09 05:20:49.968462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.613 qpair failed and we were unable to recover it. 00:26:13.613 [2024-12-09 05:20:49.968701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.613 [2024-12-09 05:20:49.968717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.613 qpair failed and we were unable to recover it. 00:26:13.613 [2024-12-09 05:20:49.968902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.613 [2024-12-09 05:20:49.968918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.613 qpair failed and we were unable to recover it. 00:26:13.613 [2024-12-09 05:20:49.969123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.613 [2024-12-09 05:20:49.969139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.613 qpair failed and we were unable to recover it. 00:26:13.613 [2024-12-09 05:20:49.969360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.613 [2024-12-09 05:20:49.969375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.613 qpair failed and we were unable to recover it. 
00:26:13.613 [2024-12-09 05:20:49.969539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.613 [2024-12-09 05:20:49.969552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.613 qpair failed and we were unable to recover it. 00:26:13.613 [2024-12-09 05:20:49.969736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.613 [2024-12-09 05:20:49.969749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.613 qpair failed and we were unable to recover it. 00:26:13.613 [2024-12-09 05:20:49.969913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.613 [2024-12-09 05:20:49.969926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.613 qpair failed and we were unable to recover it. 00:26:13.613 [2024-12-09 05:20:49.970016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.613 [2024-12-09 05:20:49.970028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.613 qpair failed and we were unable to recover it. 00:26:13.613 [2024-12-09 05:20:49.970134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.970147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.970307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.970320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.970406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.970418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.970585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.970598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.970771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.970784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.970991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.971008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 
00:26:13.614 [2024-12-09 05:20:49.971174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.971187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.971342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.971355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.971533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.971548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.971706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.971719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.971883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.971895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.972079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.972092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.972256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.972269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.972419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.972430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.972526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.972538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.972693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.972706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 
00:26:13.614 [2024-12-09 05:20:49.972933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.972945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.973105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.973118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.973231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.973244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.973396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.973410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.973501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.973513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.973610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.973622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.973776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.973788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.973932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.973945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.974098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.974111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.974206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.974218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 
00:26:13.614 [2024-12-09 05:20:49.974328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.974340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.974444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.974456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.974634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.974647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.974787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.974799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.974888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.974901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.975008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.975022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.975125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.975138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.975226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.975239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.975340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.975353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.975466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.975485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 
00:26:13.614 [2024-12-09 05:20:49.975649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.975666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.975755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.614 [2024-12-09 05:20:49.975771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.614 qpair failed and we were unable to recover it. 00:26:13.614 [2024-12-09 05:20:49.975927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.975943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.976165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.976182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.976288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.976305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.976406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.976422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.976542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.976558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.976651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.976667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.976936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.976953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.977112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.977128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 
00:26:13.615 [2024-12-09 05:20:49.977282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.977299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.977517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.977533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.977655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.977671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.977835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.977852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.978026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.978042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.978205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.978222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.978335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.978351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.978515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.978530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.978706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.978721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.978892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.978908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 
00:26:13.615 [2024-12-09 05:20:49.979070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.979088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.979254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.979270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.979467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.979484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.979728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.979745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.979914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.979930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.980169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.980186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.980310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.980326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.980438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.980454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.980692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.980709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.980895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.980912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 
00:26:13.615 [2024-12-09 05:20:49.981077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.981094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.981205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.981222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.981404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.981420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.981525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.981541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.981654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.981670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.981843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.981859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.982010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.982027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.982113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.982129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.982310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.615 [2024-12-09 05:20:49.982327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.615 qpair failed and we were unable to recover it. 00:26:13.615 [2024-12-09 05:20:49.982549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.982571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 
00:26:13.616 [2024-12-09 05:20:49.982838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.982854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.983091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.983108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.983227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.983243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.983408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.983424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.983515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.983531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.983722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.983738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.983907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.983923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.984171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.984192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.984425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.984441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.984674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.984689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 
00:26:13.616 [2024-12-09 05:20:49.984960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.984977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.985111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.985128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.985228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.985244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.985397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.985413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.985528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.985544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.985732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.985748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.986006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.986024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.986189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.986207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.986331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.986348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.986518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.986535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 
00:26:13.616 [2024-12-09 05:20:49.986795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.986812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.986995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.987017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.987156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.987173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.987357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.987374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.987483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.987500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.987699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.987716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.987870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.987888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.988049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.988066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.988164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.988180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.988375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.988392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 
00:26:13.616 [2024-12-09 05:20:49.988550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.988566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.988757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.988774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.988923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.988940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.989161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.989178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.989297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.989313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.989494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.616 [2024-12-09 05:20:49.989510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.616 qpair failed and we were unable to recover it. 00:26:13.616 [2024-12-09 05:20:49.989668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.989684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.989786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.989804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.990011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.990028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.990157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.990176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 
00:26:13.617 [2024-12-09 05:20:49.990418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.990434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.990742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.990758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.990919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.990936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.991108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.991125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.991235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.991251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.991433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.991450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.991671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.991687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.991953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.991969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.992086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.992104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.992293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.992311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 
00:26:13.617 [2024-12-09 05:20:49.992412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.992430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.992672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.992689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.992821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.992837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.993079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.993095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.993277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.993297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.993465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.993484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.993651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.993670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.993915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.993931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.994116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.994132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.994381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.994397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 
00:26:13.617 [2024-12-09 05:20:49.994584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.994603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.994780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.994799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.994969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.994987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.995251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.995270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.995443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.995461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.995650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.995667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.995844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.995862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.996125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.996143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.996322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.996338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.996513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.996529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 
00:26:13.617 [2024-12-09 05:20:49.996683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.996699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.996802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.996819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.617 qpair failed and we were unable to recover it. 00:26:13.617 [2024-12-09 05:20:49.996930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.617 [2024-12-09 05:20:49.996946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:49.997166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:49.997183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:49.997409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:49.997426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:49.997539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:49.997555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:49.997766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:49.997781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:49.997887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:49.997903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:49.998010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:49.998027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:49.998118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:49.998137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 
00:26:13.618 [2024-12-09 05:20:49.998247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:49.998264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:49.998376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:49.998392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:49.998510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:49.998526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:49.998676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:49.998692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:49.998800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:49.998816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:49.998905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:49.998921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:49.999085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:49.999102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:49.999323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:49.999340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:49.999527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:49.999544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:49.999622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:49.999638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 
00:26:13.618 [2024-12-09 05:20:49.999795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:49.999811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:49.999914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:49.999930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:50.000041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:50.000057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:50.000231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:50.000248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:50.000442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:50.000458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:50.000622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:50.000639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:50.000738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:50.000754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:50.000856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:50.000873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:50.001037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:50.001055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:50.001163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:50.001179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 
00:26:13.618 [2024-12-09 05:20:50.001363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:50.001379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:50.001488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:50.001504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:50.001646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:50.001661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:50.001757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:50.001773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:50.001875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:50.001891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:50.001988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.618 [2024-12-09 05:20:50.002009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.618 qpair failed and we were unable to recover it. 00:26:13.618 [2024-12-09 05:20:50.002117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.002134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.002315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.002331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.002428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.002444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.002539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.002556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 
00:26:13.619 [2024-12-09 05:20:50.002656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.002672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.002780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.002797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.002901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.002917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.003022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.003039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.003170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.003186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.003261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.003278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.003353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.003369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.003447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.003463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.003557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.003574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.003739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.003758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 
00:26:13.619 [2024-12-09 05:20:50.003860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.003876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.004029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.004046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.004153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.004169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.004283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.004299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.004386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.004403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.004567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.004583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.004670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.004687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.004791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.004808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.005011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.005028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.005122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.005138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 
00:26:13.619 [2024-12-09 05:20:50.005224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.005241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.005345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.005361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.005518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.005534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.005716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.005732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.005984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.006012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.006229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.006246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.006420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.006436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.006531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.006547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.006807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.006824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.007082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.007098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 
00:26:13.619 [2024-12-09 05:20:50.007200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.007216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.007450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.007466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.007584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.619 [2024-12-09 05:20:50.007600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.619 qpair failed and we were unable to recover it. 00:26:13.619 [2024-12-09 05:20:50.007861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.007878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.008157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.008174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.008346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.008362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.008541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.008558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.008787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.008803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.008971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.008987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.009195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.009212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 
00:26:13.620 [2024-12-09 05:20:50.009342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.009359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.009467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.009483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.009585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.009601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.009705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.009720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.009871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.009888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.009989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.010012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.010131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.010147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.010253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.010270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.010473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.010490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.010665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.010684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 
00:26:13.620 [2024-12-09 05:20:50.010840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.010856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.011052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.011070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.011230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.011246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.011362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.011378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.011472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.011490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.011685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.011701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.011870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.011887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.011985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.012007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.012212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.012228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.012471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.012488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 
00:26:13.620 [2024-12-09 05:20:50.012711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.012727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.012909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.012926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.013112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.013128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.013315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.013331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.013439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.013456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.013607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.013623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.013732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.013747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.013973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.013990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.014170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.014187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 00:26:13.620 [2024-12-09 05:20:50.014288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.620 [2024-12-09 05:20:50.014304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.620 qpair failed and we were unable to recover it. 
00:26:13.621 [2024-12-09 05:20:50.014476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.014492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.014596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.014612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.014714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.014730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.014835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.014851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.014949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.014966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.015068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.015085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.015194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.015210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.015319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.015335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.015438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.015456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.015539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.015555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 
00:26:13.621 [2024-12-09 05:20:50.015701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.015717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.015835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.015850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.015956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.015973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.016125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.016142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.016319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.016335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.016428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.016443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.016553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.016569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.016726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.016742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.016887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.016903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.017106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.017129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 
00:26:13.621 [2024-12-09 05:20:50.017251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.017268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.017363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.017380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.017490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.017506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.017757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.017786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.017824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a8db20 (9): Bad file descriptor 00:26:13.621 [2024-12-09 05:20:50.019059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.019091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.019310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.019330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.019520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.019537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.019828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.019845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.020013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.020030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 
00:26:13.621 [2024-12-09 05:20:50.020240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.020257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.020371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.020388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.020560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.020576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.020742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.020764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.020956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.020975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.021126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.021143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.621 [2024-12-09 05:20:50.021259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.621 [2024-12-09 05:20:50.021276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.621 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.021498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.021515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.021609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.021627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.021736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.021753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 
00:26:13.622 [2024-12-09 05:20:50.021905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.021923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.022165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.022182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.022350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.022367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.022473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.022489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.022639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.022655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.022753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.022770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.022928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.022944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.023050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.023067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.023193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.023210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.023375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.023392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 
00:26:13.622 [2024-12-09 05:20:50.023478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.023495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.023622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.023638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.023802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.023819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.023978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.023995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.024166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.024183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.024354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.024371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.024555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.024571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.024676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.024693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.024876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.024894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.025111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.025128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 
00:26:13.622 [2024-12-09 05:20:50.025344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.025364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.025463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.025480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.025600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.025617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.025785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.025802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.026032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.026050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.026135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.026152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.026347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.026364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.026481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.026498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.026694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.026712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.026944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.026962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 
00:26:13.622 [2024-12-09 05:20:50.027122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.027139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.027302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.027319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.027505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.027523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.027719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.622 [2024-12-09 05:20:50.027736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.622 qpair failed and we were unable to recover it. 00:26:13.622 [2024-12-09 05:20:50.027923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.027941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.028108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.028125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.028229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.028247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.028400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.028416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.028591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.028608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.028769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.028786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 
00:26:13.623 [2024-12-09 05:20:50.028885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.028902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.029088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.029105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.029269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.029287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.029420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.029437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.029587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.029603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.029776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.029792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.029986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.030008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.030194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.030211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.030384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.030402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.030511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.030528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 
00:26:13.623 [2024-12-09 05:20:50.030759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.030777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.030953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.030970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.031145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.031162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.031273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.031291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.031400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.031417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.031581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.031598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.031763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.031779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.031988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.032011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.032125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.032143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.032253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.032269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 
00:26:13.623 [2024-12-09 05:20:50.032376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.032393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.032572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.032600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.032765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.032782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.032875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.032892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.033077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.033094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.033251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.033268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.033368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.033385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.623 [2024-12-09 05:20:50.033498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.623 [2024-12-09 05:20:50.033515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.623 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.033691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.033708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.033890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.033907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 
00:26:13.624 [2024-12-09 05:20:50.034008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.034025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.034194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.034211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.034332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.034349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.034456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.034473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.034661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.034682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.034847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.034864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.035027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.035045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.035215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.035232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.035383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.035399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.035582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.035599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 
00:26:13.624 [2024-12-09 05:20:50.035801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.035818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.036057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.036075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.036243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.036259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.036445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.036461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.036576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.036593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.036695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.036712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.036872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.036889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.037071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.037088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.037213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.037230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.037335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.037351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 
00:26:13.624 [2024-12-09 05:20:50.037567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.037584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.037702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.037718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.037898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.037916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.038069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.038086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.038243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.038260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.038379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.038396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.038494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.038511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.038612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.038628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.038783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.038800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.038898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.038914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 
00:26:13.624 [2024-12-09 05:20:50.039136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.039154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.039372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.039393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.039565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.039579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.039788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.039801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.039909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.624 [2024-12-09 05:20:50.039922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.624 qpair failed and we were unable to recover it. 00:26:13.624 [2024-12-09 05:20:50.040034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.040048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.040146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.040158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.040244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.040257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.040333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.040346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.040560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.040573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 
00:26:13.625 [2024-12-09 05:20:50.040762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.040774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.040952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.040964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.041051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.041064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.041137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.041150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.041253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.041266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.041362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.041375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.042111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.042135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.042305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.042318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.042407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.042420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.042506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.042518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 
00:26:13.625 [2024-12-09 05:20:50.042668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.042680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.042894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.042907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.043052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.043065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.043149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.043161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.043246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.043258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.043340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.043351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.043438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.043451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.043608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.043621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.043728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.043740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.043827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.043840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 
00:26:13.625 [2024-12-09 05:20:50.043996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.044014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.044156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.044168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.044244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.044256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.044338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.044351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.044423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.044436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.044526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.044538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.044630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.044642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.044720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.044733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.044818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.044830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.044919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.044932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 
00:26:13.625 [2024-12-09 05:20:50.045018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.045030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.045116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.625 [2024-12-09 05:20:50.045133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.625 qpair failed and we were unable to recover it. 00:26:13.625 [2024-12-09 05:20:50.045286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.626 [2024-12-09 05:20:50.045299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.626 qpair failed and we were unable to recover it. 00:26:13.626 [2024-12-09 05:20:50.045451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.626 [2024-12-09 05:20:50.045464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.626 qpair failed and we were unable to recover it. 00:26:13.626 [2024-12-09 05:20:50.045549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.626 [2024-12-09 05:20:50.045562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.626 qpair failed and we were unable to recover it. 00:26:13.626 [2024-12-09 05:20:50.045640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.626 [2024-12-09 05:20:50.045653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.626 qpair failed and we were unable to recover it. 00:26:13.626 [2024-12-09 05:20:50.045742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.626 [2024-12-09 05:20:50.045756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.626 qpair failed and we were unable to recover it. 00:26:13.626 [2024-12-09 05:20:50.045863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.626 [2024-12-09 05:20:50.045876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.626 qpair failed and we were unable to recover it. 00:26:13.626 [2024-12-09 05:20:50.045956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.626 [2024-12-09 05:20:50.045968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.626 qpair failed and we were unable to recover it. 00:26:13.626 [2024-12-09 05:20:50.046058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.626 [2024-12-09 05:20:50.046072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.626 qpair failed and we were unable to recover it. 
00:26:13.626 [2024-12-09 05:20:50.046160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.626 [2024-12-09 05:20:50.046172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420
00:26:13.626 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and "qpair failed and we were unable to recover it." messages repeat for tqpair=0x7f96b4000b90 from 05:20:50.046248 through 05:20:50.061987 ...]
00:26:13.629 [2024-12-09 05:20:50.062077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.629 [2024-12-09 05:20:50.062097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420
00:26:13.629 qpair failed and we were unable to recover it.
[... the same failure repeats for tqpair=0x7f96b0000b90 from 05:20:50.062204 through 05:20:50.071766 ...]
00:26:13.631 [2024-12-09 05:20:50.071937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.631 [2024-12-09 05:20:50.071952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420
00:26:13.631 qpair failed and we were unable to recover it.
[... the same failure repeats for tqpair=0x7f96b4000b90 from 05:20:50.072053 through 05:20:50.072864 ...]
00:26:13.631 [2024-12-09 05:20:50.073045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.631 [2024-12-09 05:20:50.073058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.631 qpair failed and we were unable to recover it. 00:26:13.631 [2024-12-09 05:20:50.073222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.631 [2024-12-09 05:20:50.073234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.631 qpair failed and we were unable to recover it. 00:26:13.631 [2024-12-09 05:20:50.073323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.631 [2024-12-09 05:20:50.073335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.631 qpair failed and we were unable to recover it. 00:26:13.631 [2024-12-09 05:20:50.073427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.631 [2024-12-09 05:20:50.073440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.631 qpair failed and we were unable to recover it. 00:26:13.631 [2024-12-09 05:20:50.073513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.631 [2024-12-09 05:20:50.073526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.631 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.073665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.073677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.073821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.073833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.074069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.074083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.074232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.074244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.074408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.074420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 
00:26:13.632 [2024-12-09 05:20:50.074526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.074538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.074636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.074649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.074793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.074805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.074987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.075003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.075084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.075096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.075189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.075201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.075307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.075319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.075438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.075452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.075609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.075622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.075778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.075790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 
00:26:13.632 [2024-12-09 05:20:50.075881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.075893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.075986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.076007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.076098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.076111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.076255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.076267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.076430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.076443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.076592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.076604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.076811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.076824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.076879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.076892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.077077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.077090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.077279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.077292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 
00:26:13.632 [2024-12-09 05:20:50.077389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.077402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.077562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.077575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.077748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.077760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.077941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.077954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.078125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.078138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.078242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.078255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.078346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.078359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.078506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.078518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.632 qpair failed and we were unable to recover it. 00:26:13.632 [2024-12-09 05:20:50.078606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.632 [2024-12-09 05:20:50.078619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.078782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.078795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 
00:26:13.633 [2024-12-09 05:20:50.078869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.078881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.078986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.079004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.079107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.079119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.079194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.079206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.079297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.079310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.079454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.079466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.079551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.079563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.079740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.079753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.079840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.079858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.079971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.079987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 
00:26:13.633 [2024-12-09 05:20:50.080073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.080089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.080264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.080280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.080431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.080448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.080534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.080550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.080659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.080675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.080766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.080783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.080882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.080899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.081057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.081074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.081235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.081252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.081366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.081382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 
00:26:13.633 [2024-12-09 05:20:50.081491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.081508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.081603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.081621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.081721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.081738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.081830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.081847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.081946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.081962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.082050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.082067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.082219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.082236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.082386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.082402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.082555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.082572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.082661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.082677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 
00:26:13.633 [2024-12-09 05:20:50.082747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.082764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.082915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.082932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.083036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.083053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.083146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.083162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.083266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.083283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.083388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.083405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.633 qpair failed and we were unable to recover it. 00:26:13.633 [2024-12-09 05:20:50.083481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.633 [2024-12-09 05:20:50.083497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.083617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.083633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.083734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.083750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.083835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.083851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 
00:26:13.634 [2024-12-09 05:20:50.084001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.084018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.084101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.084118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.084334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.084351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.084460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.084476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.084634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.084651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.084808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.084824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.084935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.084951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.085113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.085130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.085225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.085242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.085347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.085363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 
00:26:13.634 [2024-12-09 05:20:50.085455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.085471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.085588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.085605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.085695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.085711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.085821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.085837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.085943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.085960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.086054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.086071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.086242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.086258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.086426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.086444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.086603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.086619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.086714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.086731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 
00:26:13.634 [2024-12-09 05:20:50.086807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.086823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.086927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.086949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.087048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.087065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.087167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.087183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.087335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.087353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.087457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.087472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.087559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.087573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.087680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.087693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.087774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.087786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.087875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.087887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 
00:26:13.634 [2024-12-09 05:20:50.087975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.087988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.088079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.088091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.088169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.088181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.088259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.088272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.634 [2024-12-09 05:20:50.088372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.634 [2024-12-09 05:20:50.088385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.634 qpair failed and we were unable to recover it. 00:26:13.635 [2024-12-09 05:20:50.088462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.635 [2024-12-09 05:20:50.088475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.635 qpair failed and we were unable to recover it. 00:26:13.635 [2024-12-09 05:20:50.088628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.635 [2024-12-09 05:20:50.088642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.635 qpair failed and we were unable to recover it. 00:26:13.635 [2024-12-09 05:20:50.088787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.635 [2024-12-09 05:20:50.088799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.635 qpair failed and we were unable to recover it. 00:26:13.635 [2024-12-09 05:20:50.088947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.635 [2024-12-09 05:20:50.088960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.635 qpair failed and we were unable to recover it. 00:26:13.635 [2024-12-09 05:20:50.089036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.635 [2024-12-09 05:20:50.089049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.635 qpair failed and we were unable to recover it. 
00:26:13.635 [2024-12-09 05:20:50.089206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.635 [2024-12-09 05:20:50.089218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.635 qpair failed and we were unable to recover it. 00:26:13.635 [2024-12-09 05:20:50.089304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.635 [2024-12-09 05:20:50.089316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.635 qpair failed and we were unable to recover it. 00:26:13.635 [2024-12-09 05:20:50.089393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.635 [2024-12-09 05:20:50.089405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.635 qpair failed and we were unable to recover it. 00:26:13.635 [2024-12-09 05:20:50.089485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.635 [2024-12-09 05:20:50.089497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.635 qpair failed and we were unable to recover it. 00:26:13.635 [2024-12-09 05:20:50.089568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.635 [2024-12-09 05:20:50.089581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.635 qpair failed and we were unable to recover it. 00:26:13.635 [2024-12-09 05:20:50.089661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.635 [2024-12-09 05:20:50.089673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.635 qpair failed and we were unable to recover it. 00:26:13.635 [2024-12-09 05:20:50.089746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.635 [2024-12-09 05:20:50.089759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.635 qpair failed and we were unable to recover it. 00:26:13.635 [2024-12-09 05:20:50.089854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.635 [2024-12-09 05:20:50.089867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.635 qpair failed and we were unable to recover it. 00:26:13.635 [2024-12-09 05:20:50.089952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.635 [2024-12-09 05:20:50.089977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.635 qpair failed and we were unable to recover it. 00:26:13.635 [2024-12-09 05:20:50.090070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.635 [2024-12-09 05:20:50.090088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.635 qpair failed and we were unable to recover it. 
00:26:13.635 [2024-12-09 05:20:50.090314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.635 [2024-12-09 05:20:50.090330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.635 qpair failed and we were unable to recover it. 00:26:13.635 [2024-12-09 05:20:50.090417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.635 [2024-12-09 05:20:50.090433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.635 qpair failed and we were unable to recover it. 00:26:13.635 [2024-12-09 05:20:50.090520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.635 [2024-12-09 05:20:50.090537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.635 qpair failed and we were unable to recover it. 00:26:13.635 [2024-12-09 05:20:50.090621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.635 [2024-12-09 05:20:50.090636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.635 qpair failed and we were unable to recover it. 00:26:13.635 [2024-12-09 05:20:50.090722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.635 [2024-12-09 05:20:50.090736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.635 qpair failed and we were unable to recover it. 00:26:13.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3738139 Killed "${NVMF_APP[@]}" "$@" 00:26:13.635 [2024-12-09 05:20:50.090878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.635 [2024-12-09 05:20:50.090890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.635 qpair failed and we were unable to recover it. 00:26:13.635 [2024-12-09 05:20:50.091044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.635 [2024-12-09 05:20:50.091057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.635 qpair failed and we were unable to recover it. 00:26:13.635 [2024-12-09 05:20:50.091157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.635 [2024-12-09 05:20:50.091170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.635 qpair failed and we were unable to recover it. 00:26:13.635 [2024-12-09 05:20:50.091317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.635 [2024-12-09 05:20:50.091330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.635 qpair failed and we were unable to recover it. 
00:26:13.635 [2024-12-09 05:20:50.091410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.635 [2024-12-09 05:20:50.091422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420
00:26:13.635 qpair failed and we were unable to recover it.
00:26:13.635 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:26:13.635 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:13.635 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:13.635 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:13.636 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... interleaved with these xtrace lines, the connect() failed / sock connection error / qpair-failed triplet keeps repeating over timestamps 05:20:50.091502-05:20:50.093464, first for tqpair=0x7f96b4000b90 and then for tqpair=0x7f96b0000b90 ...]
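The nvmfappstart -m 0xF0 call above restarts the NVMe-oF target with an explicit CPU core mask. As a generic illustration of how such a hex mask maps to cores (a sketch, not SPDK code): bit N set means core N is used, so 0xF0 = binary 11110000 selects cores 4-7.

# Decode which CPU cores a hex core mask such as -m 0xF0 selects.
mask=0xF0
for core in $(seq 0 15); do
    if (( (mask >> core) & 1 )); then
        echo "core ${core} selected"    # prints cores 4, 5, 6 and 7 for 0xF0
    fi
done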
00:26:13.636 [2024-12-09 05:20:50.093550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.636 [2024-12-09 05:20:50.093565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420
00:26:13.636 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / "qpair failed and we were unable to recover it." triplet keeps repeating over timestamps 05:20:50.093722-05:20:50.098986, for tqpair=0x7f96b0000b90, then mostly tqpair=0x7f96b4000b90, with the last few failures reported against tqpair=0x1a7fbe0 ...]
00:26:13.637 [2024-12-09 05:20:50.099083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.637 [2024-12-09 05:20:50.099100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420
00:26:13.637 qpair failed and we were unable to recover it.
00:26:13.637 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3738966
00:26:13.637 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3738966
00:26:13.637 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:13.637 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3738966 ']'
00:26:13.637 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:13.637 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:13.637 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:13.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:13.637 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:13.638 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... interleaved with these xtrace lines, the connect() failed / sock connection error / qpair-failed triplet keeps repeating over timestamps 05:20:50.099191-05:20:50.102152, for tqpair=0x1a7fbe0, tqpair=0x7f96b4000b90 and tqpair=0x7f96b0000b90 ...]
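The waitforlisten 3738966 call above waits for the freshly launched nvmf_tgt (pid 3738966) to start up and expose its JSON-RPC UNIX socket at /var/tmp/spdk.sock, retrying up to max_retries=100 times, as the echoed "Waiting for process..." message indicates. A rough bash sketch of that kind of wait loop (an approximation with assumed internals, not the actual nvmf/common.sh or autotest_common.sh code):

# Poll until the target's RPC UNIX socket appears while the process is still
# alive, or give up after max_retries attempts (assumed logic; the pid,
# socket path and retry count below are the values taken from the log above).
pid=3738966
rpc_addr=/var/tmp/spdk.sock
max_retries=100
while (( max_retries-- > 0 )); do
    kill -0 "$pid" 2>/dev/null || { echo "process $pid exited"; break; }
    if [ -S "$rpc_addr" ]; then
        echo "process $pid is listening on $rpc_addr"
        break
    fi
    sleep 0.5
done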
00:26:13.638 [2024-12-09 05:20:50.102234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.638 [2024-12-09 05:20:50.102250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420
00:26:13.638 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / "qpair failed and we were unable to recover it." triplet keeps repeating over timestamps 05:20:50.102333-05:20:50.117962, first for tqpair=0x7f96b0000b90 and then for tqpair=0x7f96b4000b90 ...]
00:26:13.641 [2024-12-09 05:20:50.118119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.641 [2024-12-09 05:20:50.118134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.641 qpair failed and we were unable to recover it. 00:26:13.641 [2024-12-09 05:20:50.118305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.641 [2024-12-09 05:20:50.118318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.641 qpair failed and we were unable to recover it. 00:26:13.641 [2024-12-09 05:20:50.118468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.641 [2024-12-09 05:20:50.118480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.641 qpair failed and we were unable to recover it. 00:26:13.641 [2024-12-09 05:20:50.118557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.641 [2024-12-09 05:20:50.118570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.641 qpair failed and we were unable to recover it. 00:26:13.641 [2024-12-09 05:20:50.118745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.641 [2024-12-09 05:20:50.118760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.641 qpair failed and we were unable to recover it. 00:26:13.641 [2024-12-09 05:20:50.118842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.641 [2024-12-09 05:20:50.118854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.641 qpair failed and we were unable to recover it. 00:26:13.641 [2024-12-09 05:20:50.118916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.641 [2024-12-09 05:20:50.118928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.641 qpair failed and we were unable to recover it. 00:26:13.641 [2024-12-09 05:20:50.119088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.641 [2024-12-09 05:20:50.119101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.641 qpair failed and we were unable to recover it. 00:26:13.641 [2024-12-09 05:20:50.119275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.641 [2024-12-09 05:20:50.119288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.641 qpair failed and we were unable to recover it. 00:26:13.641 [2024-12-09 05:20:50.119422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.641 [2024-12-09 05:20:50.119434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.641 qpair failed and we were unable to recover it. 
00:26:13.641 [2024-12-09 05:20:50.119512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.641 [2024-12-09 05:20:50.119524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.641 qpair failed and we were unable to recover it. 00:26:13.641 [2024-12-09 05:20:50.119735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.641 [2024-12-09 05:20:50.119748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.641 qpair failed and we were unable to recover it. 00:26:13.641 [2024-12-09 05:20:50.119841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.641 [2024-12-09 05:20:50.119854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.641 qpair failed and we were unable to recover it. 00:26:13.641 [2024-12-09 05:20:50.119955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.641 [2024-12-09 05:20:50.119967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.641 qpair failed and we were unable to recover it. 00:26:13.641 [2024-12-09 05:20:50.120059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.641 [2024-12-09 05:20:50.120072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.641 qpair failed and we were unable to recover it. 00:26:13.641 [2024-12-09 05:20:50.120154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.641 [2024-12-09 05:20:50.120169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.641 qpair failed and we were unable to recover it. 00:26:13.641 [2024-12-09 05:20:50.120259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.641 [2024-12-09 05:20:50.120273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.641 qpair failed and we were unable to recover it. 00:26:13.641 [2024-12-09 05:20:50.120354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.641 [2024-12-09 05:20:50.120366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.641 qpair failed and we were unable to recover it. 00:26:13.641 [2024-12-09 05:20:50.120541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.641 [2024-12-09 05:20:50.120554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.641 qpair failed and we were unable to recover it. 00:26:13.641 [2024-12-09 05:20:50.120787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.641 [2024-12-09 05:20:50.120801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.641 qpair failed and we were unable to recover it. 
00:26:13.641 [2024-12-09 05:20:50.120898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.641 [2024-12-09 05:20:50.120911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.641 qpair failed and we were unable to recover it. 00:26:13.641 [2024-12-09 05:20:50.121067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.641 [2024-12-09 05:20:50.121081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.641 qpair failed and we were unable to recover it. 00:26:13.641 [2024-12-09 05:20:50.121293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.641 [2024-12-09 05:20:50.121308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.641 qpair failed and we were unable to recover it. 00:26:13.641 [2024-12-09 05:20:50.121431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.641 [2024-12-09 05:20:50.121444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.641 qpair failed and we were unable to recover it. 00:26:13.641 [2024-12-09 05:20:50.121600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.641 [2024-12-09 05:20:50.121612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.641 qpair failed and we were unable to recover it. 00:26:13.641 [2024-12-09 05:20:50.121704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.641 [2024-12-09 05:20:50.121717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.641 qpair failed and we were unable to recover it. 00:26:13.641 [2024-12-09 05:20:50.121855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.121868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.121973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.121986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.122082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.122095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.122179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.122190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 
00:26:13.642 [2024-12-09 05:20:50.122289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.122301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.122396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.122411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.122572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.122585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.122720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.122733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.122811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.122824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.122911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.122924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.122995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.123027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.123124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.123136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.123289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.123302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.123373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.123385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 
00:26:13.642 [2024-12-09 05:20:50.123464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.123478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.123582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.123595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.123701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.123714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.123804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.123817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.123967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.123983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.124088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.124101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.124191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.124204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.124288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.124300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.124389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.124401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.124545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.124559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 
00:26:13.642 [2024-12-09 05:20:50.124711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.124723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.124822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.124835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.124916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.124928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.125078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.125091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.125181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.125193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.125360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.125373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.125464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.125476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.125573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.125586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.125672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.125685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.125769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.125782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 
00:26:13.642 [2024-12-09 05:20:50.125887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.125899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.642 [2024-12-09 05:20:50.125972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.642 [2024-12-09 05:20:50.125985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.642 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.126072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.126085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.126186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.126198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.126284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.126297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.126372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.126385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.126526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.126538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.126687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.126700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.126799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.126811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.126906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.126918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 
00:26:13.643 [2024-12-09 05:20:50.127003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.127016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.127106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.127119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.127205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.127217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.127294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.127306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.127449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.127462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.127563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.127575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.127792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.127808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.127888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.127900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.128014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.128028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.128119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.128132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 
00:26:13.643 [2024-12-09 05:20:50.128211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.128223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.128307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.128319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.128415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.128428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.128511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.128523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.128682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.128697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.128850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.128868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.128959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.128971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.129053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.129067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.129145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.129158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.129304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.129317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 
00:26:13.643 [2024-12-09 05:20:50.129402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.129415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.129575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.129587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.129667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.129680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.129762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.129774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.129848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.129861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.130013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.130027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.130101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.130114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.643 [2024-12-09 05:20:50.130204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.643 [2024-12-09 05:20:50.130220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.643 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.130312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.130325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.130475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.130488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 
00:26:13.644 [2024-12-09 05:20:50.130631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.130643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.130742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.130755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.130836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.130849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.131066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.131087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.131185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.131199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.131287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.131300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.131395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.131410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.131561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.131573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.131666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.131679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.131825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.131837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 
00:26:13.644 [2024-12-09 05:20:50.131995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.132011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.132114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.132135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.132298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.132315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.132407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.132424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.132521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.132538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.132786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.132803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.132898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.132915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.132994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.133018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.133183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.133199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.133272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.133289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 
00:26:13.644 [2024-12-09 05:20:50.133426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.133443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.133633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.133650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.133729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.133746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.133846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.133864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.133960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.133977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.134099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.134118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.134269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.134286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.134387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.134404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.134579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.134595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.134759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.134776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 
00:26:13.644 [2024-12-09 05:20:50.134907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.134924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.135010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.135027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.644 [2024-12-09 05:20:50.135186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.644 [2024-12-09 05:20:50.135202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.644 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.135291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.135308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.135472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.135488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.135635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.135653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.135737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.135753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.135838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.135855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.135942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.135962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.136142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.136155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 
00:26:13.645 [2024-12-09 05:20:50.136298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.136311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.136419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.136433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.136513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.136526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.136621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.136635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.136732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.136745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.136837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.136850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.136993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.137011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.137160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.137173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.137310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.137323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.137416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.137428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 
00:26:13.645 [2024-12-09 05:20:50.137598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.137613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.137692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.137705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.137825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.137838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.137939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.137952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.138114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.138127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.138301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.138314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.138476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.138488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.138579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.138594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.138672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.138685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.138838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.138851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 
00:26:13.645 [2024-12-09 05:20:50.138941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.138953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.139043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.139056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.139148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.139160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.139241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.139254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.139408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.139420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.139504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.139516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.139598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.645 [2024-12-09 05:20:50.139610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.645 qpair failed and we were unable to recover it. 00:26:13.645 [2024-12-09 05:20:50.139750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.646 [2024-12-09 05:20:50.139764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.646 qpair failed and we were unable to recover it. 00:26:13.646 [2024-12-09 05:20:50.139856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.646 [2024-12-09 05:20:50.139868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.646 qpair failed and we were unable to recover it. 00:26:13.646 [2024-12-09 05:20:50.139953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.646 [2024-12-09 05:20:50.139966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.646 qpair failed and we were unable to recover it. 
00:26:13.646 [2024-12-09 05:20:50.140059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.646 [2024-12-09 05:20:50.140072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420
00:26:13.646 qpair failed and we were unable to recover it.
00:26:13.646 [2024-12-09 05:20:50.140161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.646 [2024-12-09 05:20:50.140173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420
00:26:13.646 qpair failed and we were unable to recover it.
00:26:13.646 [2024-12-09 05:20:50.140840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.646 [2024-12-09 05:20:50.140863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420
00:26:13.646 qpair failed and we were unable to recover it.
00:26:13.646 [2024-12-09 05:20:50.141058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.646 [2024-12-09 05:20:50.141071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420
00:26:13.646 qpair failed and we were unable to recover it.
00:26:13.646 [2024-12-09 05:20:50.141158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.646 [2024-12-09 05:20:50.141170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420
00:26:13.646 qpair failed and we were unable to recover it.
00:26:13.646 [2024-12-09 05:20:50.141271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.646 [2024-12-09 05:20:50.141283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420
00:26:13.646 qpair failed and we were unable to recover it.
00:26:13.646 [2024-12-09 05:20:50.141458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.646 [2024-12-09 05:20:50.141471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420
00:26:13.646 qpair failed and we were unable to recover it.
00:26:13.646 [2024-12-09 05:20:50.141628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.646 [2024-12-09 05:20:50.141641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420
00:26:13.646 qpair failed and we were unable to recover it.
00:26:13.646 [2024-12-09 05:20:50.141834] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization...
00:26:13.646 [2024-12-09 05:20:50.141885] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:13.646 [2024-12-09 05:20:50.141891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.646 [2024-12-09 05:20:50.141906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420
00:26:13.646 qpair failed and we were unable to recover it.
00:26:13.646 [2024-12-09 05:20:50.142041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.646 [2024-12-09 05:20:50.142053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.646 qpair failed and we were unable to recover it. 00:26:13.646 [2024-12-09 05:20:50.142216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.646 [2024-12-09 05:20:50.142226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.646 qpair failed and we were unable to recover it. 00:26:13.646 [2024-12-09 05:20:50.142370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.646 [2024-12-09 05:20:50.142380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.646 qpair failed and we were unable to recover it. 00:26:13.646 [2024-12-09 05:20:50.142529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.646 [2024-12-09 05:20:50.142540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.646 qpair failed and we were unable to recover it. 00:26:13.646 [2024-12-09 05:20:50.142701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.646 [2024-12-09 05:20:50.142714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.646 qpair failed and we were unable to recover it. 00:26:13.646 [2024-12-09 05:20:50.142874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.646 [2024-12-09 05:20:50.142886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.646 qpair failed and we were unable to recover it. 00:26:13.646 [2024-12-09 05:20:50.142961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.646 [2024-12-09 05:20:50.142975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.646 qpair failed and we were unable to recover it. 00:26:13.646 [2024-12-09 05:20:50.143147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.646 [2024-12-09 05:20:50.143161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.646 qpair failed and we were unable to recover it. 00:26:13.646 [2024-12-09 05:20:50.143238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.646 [2024-12-09 05:20:50.143251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.646 qpair failed and we were unable to recover it. 00:26:13.646 [2024-12-09 05:20:50.143415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.646 [2024-12-09 05:20:50.143427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.646 qpair failed and we were unable to recover it. 
00:26:13.646 [2024-12-09 05:20:50.143580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.646 [2024-12-09 05:20:50.143592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.646 qpair failed and we were unable to recover it. 00:26:13.646 [2024-12-09 05:20:50.143753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.646 [2024-12-09 05:20:50.143770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.646 qpair failed and we were unable to recover it. 00:26:13.646 [2024-12-09 05:20:50.143848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.646 [2024-12-09 05:20:50.143862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.646 qpair failed and we were unable to recover it. 00:26:13.646 [2024-12-09 05:20:50.144008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.646 [2024-12-09 05:20:50.144028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.646 qpair failed and we were unable to recover it. 00:26:13.646 [2024-12-09 05:20:50.144132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.646 [2024-12-09 05:20:50.144146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.646 qpair failed and we were unable to recover it. 00:26:13.646 [2024-12-09 05:20:50.144296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.646 [2024-12-09 05:20:50.144309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.646 qpair failed and we were unable to recover it. 00:26:13.646 [2024-12-09 05:20:50.144412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.646 [2024-12-09 05:20:50.144426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.646 qpair failed and we were unable to recover it. 00:26:13.646 [2024-12-09 05:20:50.144516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.646 [2024-12-09 05:20:50.144529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.647 qpair failed and we were unable to recover it. 00:26:13.647 [2024-12-09 05:20:50.144618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.647 [2024-12-09 05:20:50.144631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.647 qpair failed and we were unable to recover it. 00:26:13.647 [2024-12-09 05:20:50.144705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.647 [2024-12-09 05:20:50.144719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.647 qpair failed and we were unable to recover it. 
00:26:13.647 [2024-12-09 05:20:50.144800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.647 [2024-12-09 05:20:50.144814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.647 qpair failed and we were unable to recover it. 00:26:13.647 [2024-12-09 05:20:50.145035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.647 [2024-12-09 05:20:50.145050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.647 qpair failed and we were unable to recover it. 00:26:13.647 [2024-12-09 05:20:50.145118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.647 [2024-12-09 05:20:50.145132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.647 qpair failed and we were unable to recover it. 00:26:13.647 [2024-12-09 05:20:50.145360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.647 [2024-12-09 05:20:50.145373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.647 qpair failed and we were unable to recover it. 00:26:13.647 [2024-12-09 05:20:50.146026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.647 [2024-12-09 05:20:50.146049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.647 qpair failed and we were unable to recover it. 00:26:13.647 [2024-12-09 05:20:50.146240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.647 [2024-12-09 05:20:50.146254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.647 qpair failed and we were unable to recover it. 00:26:13.647 [2024-12-09 05:20:50.146425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.647 [2024-12-09 05:20:50.146438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.647 qpair failed and we were unable to recover it. 00:26:13.647 [2024-12-09 05:20:50.146541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.647 [2024-12-09 05:20:50.146553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.647 qpair failed and we were unable to recover it. 00:26:13.647 [2024-12-09 05:20:50.146614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.647 [2024-12-09 05:20:50.146626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.647 qpair failed and we were unable to recover it. 00:26:13.647 [2024-12-09 05:20:50.146791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.647 [2024-12-09 05:20:50.146804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.647 qpair failed and we were unable to recover it. 
00:26:13.647 [2024-12-09 05:20:50.146887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.647 [2024-12-09 05:20:50.146899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.647 qpair failed and we were unable to recover it. 00:26:13.647 [2024-12-09 05:20:50.146985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.647 [2024-12-09 05:20:50.147014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.647 qpair failed and we were unable to recover it. 00:26:13.647 [2024-12-09 05:20:50.147101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.647 [2024-12-09 05:20:50.147113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.647 qpair failed and we were unable to recover it. 00:26:13.647 [2024-12-09 05:20:50.147254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.647 [2024-12-09 05:20:50.147266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.647 qpair failed and we were unable to recover it. 00:26:13.647 [2024-12-09 05:20:50.147381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.647 [2024-12-09 05:20:50.147394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.647 qpair failed and we were unable to recover it. 00:26:13.647 [2024-12-09 05:20:50.147481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.647 [2024-12-09 05:20:50.147493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.647 qpair failed and we were unable to recover it. 00:26:13.647 [2024-12-09 05:20:50.147584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.647 [2024-12-09 05:20:50.147596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.647 qpair failed and we were unable to recover it. 00:26:13.647 [2024-12-09 05:20:50.147680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.647 [2024-12-09 05:20:50.147693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.647 qpair failed and we were unable to recover it. 00:26:13.647 [2024-12-09 05:20:50.147876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.647 [2024-12-09 05:20:50.147909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.647 qpair failed and we were unable to recover it. 00:26:13.647 [2024-12-09 05:20:50.148023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.647 [2024-12-09 05:20:50.148043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.647 qpair failed and we were unable to recover it. 
00:26:13.647 [2024-12-09 05:20:50.148149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.647 [2024-12-09 05:20:50.148166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.647 qpair failed and we were unable to recover it. 00:26:13.647 [2024-12-09 05:20:50.148248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.647 [2024-12-09 05:20:50.148264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 00:26:13.648 [2024-12-09 05:20:50.148364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.148381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 00:26:13.648 [2024-12-09 05:20:50.148481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.148498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 00:26:13.648 [2024-12-09 05:20:50.148595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.148611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 00:26:13.648 [2024-12-09 05:20:50.148708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.148724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 00:26:13.648 [2024-12-09 05:20:50.148812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.148828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 00:26:13.648 [2024-12-09 05:20:50.148984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.149008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 00:26:13.648 [2024-12-09 05:20:50.149134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.149150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 00:26:13.648 [2024-12-09 05:20:50.149238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.149254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 
00:26:13.648 [2024-12-09 05:20:50.149348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.149364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 00:26:13.648 [2024-12-09 05:20:50.149609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.149630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 00:26:13.648 [2024-12-09 05:20:50.149732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.149749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 00:26:13.648 [2024-12-09 05:20:50.149833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.149850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 00:26:13.648 [2024-12-09 05:20:50.150023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.150040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 00:26:13.648 [2024-12-09 05:20:50.150194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.150211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 00:26:13.648 [2024-12-09 05:20:50.150305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.150322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 00:26:13.648 [2024-12-09 05:20:50.150409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.150426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 00:26:13.648 [2024-12-09 05:20:50.150523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.150540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 00:26:13.648 [2024-12-09 05:20:50.150760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.150778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 
00:26:13.648 [2024-12-09 05:20:50.150867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.150884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 00:26:13.648 [2024-12-09 05:20:50.150982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.151004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 00:26:13.648 [2024-12-09 05:20:50.151114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.151131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 00:26:13.648 [2024-12-09 05:20:50.151218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.151235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 00:26:13.648 [2024-12-09 05:20:50.151335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.151352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 00:26:13.648 [2024-12-09 05:20:50.151523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.151540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 00:26:13.648 [2024-12-09 05:20:50.151629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.151645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 00:26:13.648 [2024-12-09 05:20:50.151816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.151836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 00:26:13.648 [2024-12-09 05:20:50.151943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.151959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 00:26:13.648 [2024-12-09 05:20:50.152138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.152156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.648 qpair failed and we were unable to recover it. 
00:26:13.648 [2024-12-09 05:20:50.152263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.648 [2024-12-09 05:20:50.152280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-12-09 05:20:50.152380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.152397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-12-09 05:20:50.152482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.152498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-12-09 05:20:50.152581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.152597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-12-09 05:20:50.152692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.152709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-12-09 05:20:50.152808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.152821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-12-09 05:20:50.152962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.152976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-12-09 05:20:50.153083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.153096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-12-09 05:20:50.153268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.153282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-12-09 05:20:50.153370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.153383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 
00:26:13.649 [2024-12-09 05:20:50.153480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.153494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-12-09 05:20:50.153651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.153665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-12-09 05:20:50.153747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.153764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-12-09 05:20:50.153843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.153855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-12-09 05:20:50.153951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.153963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-12-09 05:20:50.154108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.154121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-12-09 05:20:50.154204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.154217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-12-09 05:20:50.154325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.154338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-12-09 05:20:50.154428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.154440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-12-09 05:20:50.154520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.154533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 
00:26:13.649 [2024-12-09 05:20:50.154609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.154622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-12-09 05:20:50.154708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.154723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-12-09 05:20:50.154879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.154894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-12-09 05:20:50.154981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.154994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-12-09 05:20:50.155088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.155101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-12-09 05:20:50.155185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.155198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-12-09 05:20:50.155294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.155307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-12-09 05:20:50.155382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.155395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-12-09 05:20:50.155494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.649 [2024-12-09 05:20:50.155508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-12-09 05:20:50.155669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.155682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 
00:26:13.650 [2024-12-09 05:20:50.155763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.155776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-12-09 05:20:50.155862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.155875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-12-09 05:20:50.155966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.155980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-12-09 05:20:50.156047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.156059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-12-09 05:20:50.156138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.156152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-12-09 05:20:50.156234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.156246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-12-09 05:20:50.156317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.156330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-12-09 05:20:50.156394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.156407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-12-09 05:20:50.156481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.156495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-12-09 05:20:50.156580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.156592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 
00:26:13.650 [2024-12-09 05:20:50.156678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.156691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-12-09 05:20:50.156778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.156791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-12-09 05:20:50.156871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.156885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-12-09 05:20:50.157028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.157046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-12-09 05:20:50.157151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.157165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-12-09 05:20:50.157263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.157276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-12-09 05:20:50.157366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.157378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-12-09 05:20:50.157521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.157535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-12-09 05:20:50.157628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.157647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-12-09 05:20:50.157742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.157759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 
00:26:13.650 [2024-12-09 05:20:50.157861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.157879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-12-09 05:20:50.157964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.157981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-12-09 05:20:50.158086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.158105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-12-09 05:20:50.158193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.158211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-12-09 05:20:50.158299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.158316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-12-09 05:20:50.158418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.158434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-12-09 05:20:50.158599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.650 [2024-12-09 05:20:50.158616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-12-09 05:20:50.158719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.158737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 00:26:13.651 [2024-12-09 05:20:50.158824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.158842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 00:26:13.651 [2024-12-09 05:20:50.158932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.158950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 
00:26:13.651 [2024-12-09 05:20:50.159054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.159072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 00:26:13.651 [2024-12-09 05:20:50.159168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.159184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 00:26:13.651 [2024-12-09 05:20:50.159356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.159374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 00:26:13.651 [2024-12-09 05:20:50.159464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.159481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 00:26:13.651 [2024-12-09 05:20:50.159575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.159592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 00:26:13.651 [2024-12-09 05:20:50.159676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.159693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 00:26:13.651 [2024-12-09 05:20:50.159802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.159819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 00:26:13.651 [2024-12-09 05:20:50.159968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.159986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 00:26:13.651 [2024-12-09 05:20:50.160085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.160102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 00:26:13.651 [2024-12-09 05:20:50.160193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.160210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 
00:26:13.651 [2024-12-09 05:20:50.160297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.160314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 00:26:13.651 [2024-12-09 05:20:50.160420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.160438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 00:26:13.651 [2024-12-09 05:20:50.160594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.160611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 00:26:13.651 [2024-12-09 05:20:50.160765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.160782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 00:26:13.651 [2024-12-09 05:20:50.160877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.160894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 00:26:13.651 [2024-12-09 05:20:50.160984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.161008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 00:26:13.651 [2024-12-09 05:20:50.161164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.161181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 00:26:13.651 [2024-12-09 05:20:50.161269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.161287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 00:26:13.651 [2024-12-09 05:20:50.161383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.161401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 00:26:13.651 [2024-12-09 05:20:50.161565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.161583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 
00:26:13.651 [2024-12-09 05:20:50.161671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.161689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 00:26:13.651 [2024-12-09 05:20:50.161846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.161863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 00:26:13.651 [2024-12-09 05:20:50.161948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.161965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 00:26:13.651 [2024-12-09 05:20:50.162065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.162083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 00:26:13.651 [2024-12-09 05:20:50.162165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.162181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 00:26:13.651 [2024-12-09 05:20:50.162332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.651 [2024-12-09 05:20:50.162348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.651 qpair failed and we were unable to recover it. 00:26:13.651 [2024-12-09 05:20:50.162435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.652 [2024-12-09 05:20:50.162451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.652 qpair failed and we were unable to recover it. 00:26:13.652 [2024-12-09 05:20:50.162540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.652 [2024-12-09 05:20:50.162557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.652 qpair failed and we were unable to recover it. 00:26:13.652 [2024-12-09 05:20:50.162648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.652 [2024-12-09 05:20:50.162665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.652 qpair failed and we were unable to recover it. 00:26:13.652 [2024-12-09 05:20:50.162838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.652 [2024-12-09 05:20:50.162856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.652 qpair failed and we were unable to recover it. 
00:26:13.652 [2024-12-09 05:20:50.162940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.652 [2024-12-09 05:20:50.162957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.652 qpair failed and we were unable to recover it. 00:26:13.652 [2024-12-09 05:20:50.163031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.652 [2024-12-09 05:20:50.163050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.652 qpair failed and we were unable to recover it. 00:26:13.652 [2024-12-09 05:20:50.163142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.652 [2024-12-09 05:20:50.163158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.652 qpair failed and we were unable to recover it. 00:26:13.652 [2024-12-09 05:20:50.163261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.652 [2024-12-09 05:20:50.163278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.652 qpair failed and we were unable to recover it. 00:26:13.652 [2024-12-09 05:20:50.163363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.652 [2024-12-09 05:20:50.163380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.652 qpair failed and we were unable to recover it. 00:26:13.652 [2024-12-09 05:20:50.163464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.652 [2024-12-09 05:20:50.163481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.652 qpair failed and we were unable to recover it. 00:26:13.652 [2024-12-09 05:20:50.163634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.652 [2024-12-09 05:20:50.163651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.652 qpair failed and we were unable to recover it. 00:26:13.652 [2024-12-09 05:20:50.163747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.652 [2024-12-09 05:20:50.163764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.652 qpair failed and we were unable to recover it. 00:26:13.652 [2024-12-09 05:20:50.163850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.652 [2024-12-09 05:20:50.163867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.652 qpair failed and we were unable to recover it. 00:26:13.652 [2024-12-09 05:20:50.163954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.652 [2024-12-09 05:20:50.163971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.652 qpair failed and we were unable to recover it. 
00:26:13.652 [2024-12-09 05:20:50.164095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.652 [2024-12-09 05:20:50.164113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.652 qpair failed and we were unable to recover it. 00:26:13.652 [2024-12-09 05:20:50.164198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.652 [2024-12-09 05:20:50.164214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.652 qpair failed and we were unable to recover it. 00:26:13.652 [2024-12-09 05:20:50.164304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.652 [2024-12-09 05:20:50.164323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.652 qpair failed and we were unable to recover it. 00:26:13.652 [2024-12-09 05:20:50.164481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.652 [2024-12-09 05:20:50.164497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.652 qpair failed and we were unable to recover it. 00:26:13.652 [2024-12-09 05:20:50.164581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.652 [2024-12-09 05:20:50.164597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.652 qpair failed and we were unable to recover it. 00:26:13.652 [2024-12-09 05:20:50.164684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.652 [2024-12-09 05:20:50.164701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.652 qpair failed and we were unable to recover it. 00:26:13.652 [2024-12-09 05:20:50.164855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.652 [2024-12-09 05:20:50.164873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.652 qpair failed and we were unable to recover it. 00:26:13.652 [2024-12-09 05:20:50.164954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.652 [2024-12-09 05:20:50.164971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.652 qpair failed and we were unable to recover it. 00:26:13.652 [2024-12-09 05:20:50.165073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.652 [2024-12-09 05:20:50.165092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.652 qpair failed and we were unable to recover it. 00:26:13.652 [2024-12-09 05:20:50.165254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.165271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 
00:26:13.653 [2024-12-09 05:20:50.165371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.165388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 00:26:13.653 [2024-12-09 05:20:50.165549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.165566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 00:26:13.653 [2024-12-09 05:20:50.165645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.165662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 00:26:13.653 [2024-12-09 05:20:50.165751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.165767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 00:26:13.653 [2024-12-09 05:20:50.165863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.165880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 00:26:13.653 [2024-12-09 05:20:50.165966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.165983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 00:26:13.653 [2024-12-09 05:20:50.166155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.166173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 00:26:13.653 [2024-12-09 05:20:50.166259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.166276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 00:26:13.653 [2024-12-09 05:20:50.166436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.166453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 00:26:13.653 [2024-12-09 05:20:50.166661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.166679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 
00:26:13.653 [2024-12-09 05:20:50.166778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.166799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 00:26:13.653 [2024-12-09 05:20:50.166885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.166899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 00:26:13.653 [2024-12-09 05:20:50.166974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.166987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 00:26:13.653 [2024-12-09 05:20:50.167075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.167088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 00:26:13.653 [2024-12-09 05:20:50.167262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.167275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 00:26:13.653 [2024-12-09 05:20:50.167386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.167408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 00:26:13.653 [2024-12-09 05:20:50.167504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.167524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 00:26:13.653 [2024-12-09 05:20:50.167632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.167651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 00:26:13.653 [2024-12-09 05:20:50.167752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.167765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 00:26:13.653 [2024-12-09 05:20:50.167851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.167867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 
00:26:13.653 [2024-12-09 05:20:50.167957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.167971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 00:26:13.653 [2024-12-09 05:20:50.168187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.168201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 00:26:13.653 [2024-12-09 05:20:50.168297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.168310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 00:26:13.653 [2024-12-09 05:20:50.168391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.168404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 00:26:13.653 [2024-12-09 05:20:50.168480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.168493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 00:26:13.653 [2024-12-09 05:20:50.168570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.168584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 00:26:13.653 [2024-12-09 05:20:50.168650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.168663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 00:26:13.653 [2024-12-09 05:20:50.168755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.168768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 00:26:13.653 [2024-12-09 05:20:50.168845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.168858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 00:26:13.653 [2024-12-09 05:20:50.168942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.653 [2024-12-09 05:20:50.168956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.653 qpair failed and we were unable to recover it. 
00:26:13.653 [2024-12-09 05:20:50.169050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.169063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 00:26:13.654 [2024-12-09 05:20:50.169204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.169217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 00:26:13.654 [2024-12-09 05:20:50.169306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.169319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 00:26:13.654 [2024-12-09 05:20:50.169404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.169417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 00:26:13.654 [2024-12-09 05:20:50.169506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.169519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 00:26:13.654 [2024-12-09 05:20:50.169601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.169614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 00:26:13.654 [2024-12-09 05:20:50.169679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.169692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 00:26:13.654 [2024-12-09 05:20:50.169775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.169787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 00:26:13.654 [2024-12-09 05:20:50.169875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.169888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 00:26:13.654 [2024-12-09 05:20:50.169977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.169991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 
00:26:13.654 [2024-12-09 05:20:50.170090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.170103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 00:26:13.654 [2024-12-09 05:20:50.170187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.170200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 00:26:13.654 [2024-12-09 05:20:50.170279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.170291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 00:26:13.654 [2024-12-09 05:20:50.170408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.170420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 00:26:13.654 [2024-12-09 05:20:50.170501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.170513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 00:26:13.654 [2024-12-09 05:20:50.170608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.170621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 00:26:13.654 [2024-12-09 05:20:50.170718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.170739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 00:26:13.654 [2024-12-09 05:20:50.170837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.170854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 00:26:13.654 [2024-12-09 05:20:50.170941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.170958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 00:26:13.654 [2024-12-09 05:20:50.171046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.171062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 
00:26:13.654 [2024-12-09 05:20:50.171145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.171157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 00:26:13.654 [2024-12-09 05:20:50.171234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.171246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 00:26:13.654 [2024-12-09 05:20:50.171336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.171349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 00:26:13.654 [2024-12-09 05:20:50.171504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.171517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 00:26:13.654 [2024-12-09 05:20:50.171597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.171610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 00:26:13.654 [2024-12-09 05:20:50.171685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.171697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 00:26:13.654 [2024-12-09 05:20:50.171769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.171782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 00:26:13.654 [2024-12-09 05:20:50.171858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.171871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 00:26:13.654 [2024-12-09 05:20:50.171954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.654 [2024-12-09 05:20:50.171966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.654 qpair failed and we were unable to recover it. 00:26:13.655 [2024-12-09 05:20:50.172046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.172062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.655 qpair failed and we were unable to recover it. 
00:26:13.655 [2024-12-09 05:20:50.172138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.172152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.655 qpair failed and we were unable to recover it. 00:26:13.655 [2024-12-09 05:20:50.172232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.172244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.655 qpair failed and we were unable to recover it. 00:26:13.655 [2024-12-09 05:20:50.172323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.172336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.655 qpair failed and we were unable to recover it. 00:26:13.655 [2024-12-09 05:20:50.172428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.172441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.655 qpair failed and we were unable to recover it. 00:26:13.655 [2024-12-09 05:20:50.172514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.172526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.655 qpair failed and we were unable to recover it. 00:26:13.655 [2024-12-09 05:20:50.172621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.172633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.655 qpair failed and we were unable to recover it. 00:26:13.655 [2024-12-09 05:20:50.172715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.172728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.655 qpair failed and we were unable to recover it. 00:26:13.655 [2024-12-09 05:20:50.172802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.172815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.655 qpair failed and we were unable to recover it. 00:26:13.655 [2024-12-09 05:20:50.172902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.172915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.655 qpair failed and we were unable to recover it. 00:26:13.655 [2024-12-09 05:20:50.173063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.173077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.655 qpair failed and we were unable to recover it. 
00:26:13.655 [2024-12-09 05:20:50.173154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.173167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.655 qpair failed and we were unable to recover it. 00:26:13.655 [2024-12-09 05:20:50.173263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.173277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.655 qpair failed and we were unable to recover it. 00:26:13.655 [2024-12-09 05:20:50.173379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.173392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.655 qpair failed and we were unable to recover it. 00:26:13.655 [2024-12-09 05:20:50.173465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.173478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.655 qpair failed and we were unable to recover it. 00:26:13.655 [2024-12-09 05:20:50.173624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.173636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.655 qpair failed and we were unable to recover it. 00:26:13.655 [2024-12-09 05:20:50.173732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.173745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.655 qpair failed and we were unable to recover it. 00:26:13.655 [2024-12-09 05:20:50.173821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.173833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.655 qpair failed and we were unable to recover it. 00:26:13.655 [2024-12-09 05:20:50.173971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.173984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.655 qpair failed and we were unable to recover it. 00:26:13.655 [2024-12-09 05:20:50.174065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.174079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.655 qpair failed and we were unable to recover it. 00:26:13.655 [2024-12-09 05:20:50.174226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.174239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.655 qpair failed and we were unable to recover it. 
00:26:13.655 [2024-12-09 05:20:50.174355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.174369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.655 qpair failed and we were unable to recover it. 00:26:13.655 [2024-12-09 05:20:50.174439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.174452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.655 qpair failed and we were unable to recover it. 00:26:13.655 [2024-12-09 05:20:50.174547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.174559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.655 qpair failed and we were unable to recover it. 00:26:13.655 [2024-12-09 05:20:50.174709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.174722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.655 qpair failed and we were unable to recover it. 00:26:13.655 [2024-12-09 05:20:50.174801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.174813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.655 qpair failed and we were unable to recover it. 00:26:13.655 [2024-12-09 05:20:50.174889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.174902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.655 qpair failed and we were unable to recover it. 00:26:13.655 [2024-12-09 05:20:50.174969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.655 [2024-12-09 05:20:50.174988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 00:26:13.656 [2024-12-09 05:20:50.175100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.175116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 00:26:13.656 [2024-12-09 05:20:50.175200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.175216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 00:26:13.656 [2024-12-09 05:20:50.175305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.175321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 
00:26:13.656 [2024-12-09 05:20:50.175417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.175435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 00:26:13.656 [2024-12-09 05:20:50.175524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.175541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 00:26:13.656 [2024-12-09 05:20:50.175629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.175645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 00:26:13.656 [2024-12-09 05:20:50.175791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.175808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 00:26:13.656 [2024-12-09 05:20:50.175924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.175940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 00:26:13.656 [2024-12-09 05:20:50.176093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.176111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 00:26:13.656 [2024-12-09 05:20:50.176203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.176219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 00:26:13.656 [2024-12-09 05:20:50.176324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.176341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 00:26:13.656 [2024-12-09 05:20:50.176503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.176519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 00:26:13.656 [2024-12-09 05:20:50.176605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.176621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 
00:26:13.656 [2024-12-09 05:20:50.176718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.176734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 00:26:13.656 [2024-12-09 05:20:50.176818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.176834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 00:26:13.656 [2024-12-09 05:20:50.176926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.176942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 00:26:13.656 [2024-12-09 05:20:50.177041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.177059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 00:26:13.656 [2024-12-09 05:20:50.177224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.177242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 00:26:13.656 [2024-12-09 05:20:50.177463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.177479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 00:26:13.656 [2024-12-09 05:20:50.177563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.177581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 00:26:13.656 [2024-12-09 05:20:50.177664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.177681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 00:26:13.656 [2024-12-09 05:20:50.177771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.177788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 00:26:13.656 [2024-12-09 05:20:50.177900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.177916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 
00:26:13.656 [2024-12-09 05:20:50.178014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.178031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 00:26:13.656 [2024-12-09 05:20:50.178122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.178138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 00:26:13.656 [2024-12-09 05:20:50.178226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.178242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 00:26:13.656 [2024-12-09 05:20:50.178342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.178361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 00:26:13.656 [2024-12-09 05:20:50.178505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.178521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 00:26:13.656 [2024-12-09 05:20:50.178630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.656 [2024-12-09 05:20:50.178646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.656 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.178749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.178766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.178868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.178885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.179047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.179065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.179230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.179246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 
00:26:13.657 [2024-12-09 05:20:50.179342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.179359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.179449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.179465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.179534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.179550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.179650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.179667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.179756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.179773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.179858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.179874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.180025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.180044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.180149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.180165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.180281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.180298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.180514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.180531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 
00:26:13.657 [2024-12-09 05:20:50.180689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.180706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.180874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.180890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.180984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.181005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.181102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.181118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.181209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.181225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.181337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.181354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.181493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.181509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.181604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.181621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.181723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.181739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.181878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.181895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 
00:26:13.657 [2024-12-09 05:20:50.182005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.182025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.182183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.182200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.182304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.182322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.182496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.182513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.182593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.182610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.182695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.182713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.657 qpair failed and we were unable to recover it. 00:26:13.657 [2024-12-09 05:20:50.182805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.657 [2024-12-09 05:20:50.182822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.658 qpair failed and we were unable to recover it. 00:26:13.658 [2024-12-09 05:20:50.182922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.658 [2024-12-09 05:20:50.182939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.658 qpair failed and we were unable to recover it. 00:26:13.658 [2024-12-09 05:20:50.183024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.658 [2024-12-09 05:20:50.183041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.658 qpair failed and we were unable to recover it. 00:26:13.658 [2024-12-09 05:20:50.183128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.658 [2024-12-09 05:20:50.183145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.658 qpair failed and we were unable to recover it. 
00:26:13.658 [2024-12-09 05:20:50.183253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.658 [2024-12-09 05:20:50.183269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.658 qpair failed and we were unable to recover it. 00:26:13.658 [2024-12-09 05:20:50.183365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.658 [2024-12-09 05:20:50.183381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.658 qpair failed and we were unable to recover it. 00:26:13.658 [2024-12-09 05:20:50.183478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.658 [2024-12-09 05:20:50.183494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.658 qpair failed and we were unable to recover it. 00:26:13.658 [2024-12-09 05:20:50.183583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.658 [2024-12-09 05:20:50.183599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.658 qpair failed and we were unable to recover it. 00:26:13.658 [2024-12-09 05:20:50.183758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.658 [2024-12-09 05:20:50.183774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.658 qpair failed and we were unable to recover it. 00:26:13.658 [2024-12-09 05:20:50.183868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.658 [2024-12-09 05:20:50.183884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.658 qpair failed and we were unable to recover it. 00:26:13.658 [2024-12-09 05:20:50.183989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.658 [2024-12-09 05:20:50.184013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.658 qpair failed and we were unable to recover it. 00:26:13.658 [2024-12-09 05:20:50.184101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.658 [2024-12-09 05:20:50.184118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.658 qpair failed and we were unable to recover it. 00:26:13.658 [2024-12-09 05:20:50.184336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.658 [2024-12-09 05:20:50.184352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.658 qpair failed and we were unable to recover it. 00:26:13.658 [2024-12-09 05:20:50.184517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.658 [2024-12-09 05:20:50.184532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.658 qpair failed and we were unable to recover it. 
00:26:13.658 [2024-12-09 05:20:50.184622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.658 [2024-12-09 05:20:50.184640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.658 qpair failed and we were unable to recover it. 00:26:13.658 [2024-12-09 05:20:50.184736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.658 [2024-12-09 05:20:50.184752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.658 qpair failed and we were unable to recover it. 00:26:13.658 [2024-12-09 05:20:50.184858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.658 [2024-12-09 05:20:50.184875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.658 qpair failed and we were unable to recover it. 00:26:13.658 [2024-12-09 05:20:50.184987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.658 [2024-12-09 05:20:50.185010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.658 qpair failed and we were unable to recover it. 00:26:13.658 [2024-12-09 05:20:50.185111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.658 [2024-12-09 05:20:50.185128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.658 qpair failed and we were unable to recover it. 00:26:13.658 [2024-12-09 05:20:50.185220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.658 [2024-12-09 05:20:50.185236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.658 qpair failed and we were unable to recover it. 00:26:13.658 [2024-12-09 05:20:50.185389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.658 [2024-12-09 05:20:50.185406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.658 qpair failed and we were unable to recover it. 00:26:13.658 [2024-12-09 05:20:50.185589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.658 [2024-12-09 05:20:50.185608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.658 qpair failed and we were unable to recover it. 00:26:13.658 [2024-12-09 05:20:50.185680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.658 [2024-12-09 05:20:50.185696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.658 qpair failed and we were unable to recover it. 00:26:13.658 [2024-12-09 05:20:50.185865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.658 [2024-12-09 05:20:50.185882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.658 qpair failed and we were unable to recover it. 
00:26:13.658 [2024-12-09 05:20:50.185981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.658 [2024-12-09 05:20:50.186004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.658 qpair failed and we were unable to recover it. 00:26:13.658 [2024-12-09 05:20:50.186088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.658 [2024-12-09 05:20:50.186104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.658 qpair failed and we were unable to recover it. 00:26:13.659 [2024-12-09 05:20:50.186196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.186212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 00:26:13.659 [2024-12-09 05:20:50.186368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.186385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 00:26:13.659 [2024-12-09 05:20:50.186540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.186557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 00:26:13.659 [2024-12-09 05:20:50.186657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.186674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 00:26:13.659 [2024-12-09 05:20:50.186760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.186776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 00:26:13.659 [2024-12-09 05:20:50.186895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.186912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 00:26:13.659 [2024-12-09 05:20:50.187068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.187086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 00:26:13.659 [2024-12-09 05:20:50.187203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.187220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 
00:26:13.659 [2024-12-09 05:20:50.187325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.187342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 00:26:13.659 [2024-12-09 05:20:50.187428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.187445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 00:26:13.659 [2024-12-09 05:20:50.187613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.187630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 00:26:13.659 [2024-12-09 05:20:50.187783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.187799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 00:26:13.659 [2024-12-09 05:20:50.187890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.187907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 00:26:13.659 [2024-12-09 05:20:50.187990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.188016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 00:26:13.659 [2024-12-09 05:20:50.188172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.188190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 00:26:13.659 [2024-12-09 05:20:50.188338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.188355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 00:26:13.659 [2024-12-09 05:20:50.188507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.188524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 00:26:13.659 [2024-12-09 05:20:50.188617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.188633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 
00:26:13.659 [2024-12-09 05:20:50.188790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.188808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 00:26:13.659 [2024-12-09 05:20:50.188908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.188925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 00:26:13.659 [2024-12-09 05:20:50.189035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.189053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 00:26:13.659 [2024-12-09 05:20:50.189159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.189176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 00:26:13.659 [2024-12-09 05:20:50.189266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.189282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 00:26:13.659 [2024-12-09 05:20:50.189456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.189473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 00:26:13.659 [2024-12-09 05:20:50.189577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.189593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 00:26:13.659 [2024-12-09 05:20:50.189683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.189699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 00:26:13.659 [2024-12-09 05:20:50.189804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.189821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 00:26:13.659 [2024-12-09 05:20:50.189894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.189910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 
00:26:13.659 [2024-12-09 05:20:50.190009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.659 [2024-12-09 05:20:50.190026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.659 qpair failed and we were unable to recover it. 00:26:13.660 [2024-12-09 05:20:50.190113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.190130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 00:26:13.660 [2024-12-09 05:20:50.190364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.190381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 00:26:13.660 [2024-12-09 05:20:50.190489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.190506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 00:26:13.660 [2024-12-09 05:20:50.190607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.190623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 00:26:13.660 [2024-12-09 05:20:50.190795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.190812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 00:26:13.660 [2024-12-09 05:20:50.190919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.190935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 00:26:13.660 [2024-12-09 05:20:50.191038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.191056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 00:26:13.660 [2024-12-09 05:20:50.191164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.191188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 00:26:13.660 [2024-12-09 05:20:50.191292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.191308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 
00:26:13.660 [2024-12-09 05:20:50.191402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.191419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 00:26:13.660 [2024-12-09 05:20:50.191578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.191594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 00:26:13.660 [2024-12-09 05:20:50.191700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.191716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 00:26:13.660 [2024-12-09 05:20:50.191821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.191838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 00:26:13.660 [2024-12-09 05:20:50.191923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.191940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 00:26:13.660 [2024-12-09 05:20:50.192040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.192058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 00:26:13.660 [2024-12-09 05:20:50.192159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.192178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 00:26:13.660 [2024-12-09 05:20:50.192270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.192286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 00:26:13.660 [2024-12-09 05:20:50.192441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.192457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 00:26:13.660 [2024-12-09 05:20:50.192546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.192562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 
00:26:13.660 [2024-12-09 05:20:50.192636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.192651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 00:26:13.660 [2024-12-09 05:20:50.192738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.192759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 00:26:13.660 [2024-12-09 05:20:50.192918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.192935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 00:26:13.660 [2024-12-09 05:20:50.193027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.193045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 00:26:13.660 [2024-12-09 05:20:50.193148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.193164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 00:26:13.660 [2024-12-09 05:20:50.193263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.193279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 00:26:13.660 [2024-12-09 05:20:50.193364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.193380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 00:26:13.660 [2024-12-09 05:20:50.193471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.193487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 00:26:13.660 [2024-12-09 05:20:50.193578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.193595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 00:26:13.660 [2024-12-09 05:20:50.193693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.193710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 
00:26:13.660 [2024-12-09 05:20:50.193861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.660 [2024-12-09 05:20:50.193877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.660 qpair failed and we were unable to recover it. 00:26:13.661 [2024-12-09 05:20:50.194056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.194073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 00:26:13.661 [2024-12-09 05:20:50.194169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.194185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 00:26:13.661 [2024-12-09 05:20:50.194270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.194287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 00:26:13.661 [2024-12-09 05:20:50.194478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.194494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 00:26:13.661 [2024-12-09 05:20:50.194585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.194601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 00:26:13.661 [2024-12-09 05:20:50.194755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.194771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 00:26:13.661 [2024-12-09 05:20:50.194872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.194889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 00:26:13.661 [2024-12-09 05:20:50.194987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.195008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 00:26:13.661 [2024-12-09 05:20:50.195097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.195114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 
00:26:13.661 [2024-12-09 05:20:50.195215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.195232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 00:26:13.661 [2024-12-09 05:20:50.195332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.195349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 00:26:13.661 [2024-12-09 05:20:50.195440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.195456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 00:26:13.661 [2024-12-09 05:20:50.195609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.195626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 00:26:13.661 [2024-12-09 05:20:50.195717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.195733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 00:26:13.661 [2024-12-09 05:20:50.195821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.195845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 00:26:13.661 [2024-12-09 05:20:50.196009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.196022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 00:26:13.661 [2024-12-09 05:20:50.196118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.196130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 00:26:13.661 [2024-12-09 05:20:50.196327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.196345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 00:26:13.661 [2024-12-09 05:20:50.196441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.196457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 
00:26:13.661 [2024-12-09 05:20:50.196535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.196551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 00:26:13.661 [2024-12-09 05:20:50.196653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.196669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 00:26:13.661 [2024-12-09 05:20:50.196828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.196844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 00:26:13.661 [2024-12-09 05:20:50.196949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.196965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 00:26:13.661 [2024-12-09 05:20:50.197121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.197138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 00:26:13.661 [2024-12-09 05:20:50.197230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.197246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 00:26:13.661 [2024-12-09 05:20:50.197330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.197346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 00:26:13.661 [2024-12-09 05:20:50.197437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.197454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 00:26:13.661 [2024-12-09 05:20:50.197552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.197568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 00:26:13.661 [2024-12-09 05:20:50.197663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.197679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 
00:26:13.661 [2024-12-09 05:20:50.197832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.661 [2024-12-09 05:20:50.197848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.661 qpair failed and we were unable to recover it. 00:26:13.662 [2024-12-09 05:20:50.197939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.197955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 00:26:13.662 [2024-12-09 05:20:50.198038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.198054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 00:26:13.662 [2024-12-09 05:20:50.198140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.198156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 00:26:13.662 [2024-12-09 05:20:50.198243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.198259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 00:26:13.662 [2024-12-09 05:20:50.198350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.198365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 00:26:13.662 [2024-12-09 05:20:50.198459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.198475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 00:26:13.662 [2024-12-09 05:20:50.198625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.198637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 00:26:13.662 [2024-12-09 05:20:50.198802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.198814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 00:26:13.662 [2024-12-09 05:20:50.198921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.198933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 
00:26:13.662 [2024-12-09 05:20:50.199025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.199038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 00:26:13.662 [2024-12-09 05:20:50.199126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.199142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 00:26:13.662 [2024-12-09 05:20:50.199243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.199256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 00:26:13.662 [2024-12-09 05:20:50.199335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.199347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 00:26:13.662 [2024-12-09 05:20:50.199490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.199502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 00:26:13.662 [2024-12-09 05:20:50.199584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.199596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 00:26:13.662 [2024-12-09 05:20:50.199681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.199694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 00:26:13.662 [2024-12-09 05:20:50.199812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.199824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 00:26:13.662 [2024-12-09 05:20:50.199915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.199928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 00:26:13.662 [2024-12-09 05:20:50.200008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.200022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 
00:26:13.662 [2024-12-09 05:20:50.200233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.200247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 00:26:13.662 [2024-12-09 05:20:50.200336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.200349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 00:26:13.662 [2024-12-09 05:20:50.200494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.200506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 00:26:13.662 [2024-12-09 05:20:50.200581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.200594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 00:26:13.662 [2024-12-09 05:20:50.200744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.200757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 00:26:13.662 [2024-12-09 05:20:50.200837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.200849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 00:26:13.662 [2024-12-09 05:20:50.200929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.200941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 00:26:13.662 [2024-12-09 05:20:50.201026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.201040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 00:26:13.662 [2024-12-09 05:20:50.201192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.201209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 00:26:13.662 [2024-12-09 05:20:50.201372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.662 [2024-12-09 05:20:50.201386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.662 qpair failed and we were unable to recover it. 
00:26:13.662 [2024-12-09 05:20:50.201478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.201491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 00:26:13.663 [2024-12-09 05:20:50.201575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.201587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 00:26:13.663 [2024-12-09 05:20:50.201662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.201674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 00:26:13.663 [2024-12-09 05:20:50.201754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.201767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 00:26:13.663 [2024-12-09 05:20:50.201855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.201868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 00:26:13.663 [2024-12-09 05:20:50.201945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.201958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 00:26:13.663 [2024-12-09 05:20:50.202042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.202055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 00:26:13.663 [2024-12-09 05:20:50.202115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.202128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 00:26:13.663 [2024-12-09 05:20:50.202267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.202279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 00:26:13.663 [2024-12-09 05:20:50.202419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.202433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 
00:26:13.663 [2024-12-09 05:20:50.202510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.202523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 00:26:13.663 [2024-12-09 05:20:50.202611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.202623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 00:26:13.663 [2024-12-09 05:20:50.202790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.202802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 00:26:13.663 [2024-12-09 05:20:50.202881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.202894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 00:26:13.663 [2024-12-09 05:20:50.202971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.202984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 00:26:13.663 [2024-12-09 05:20:50.203139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.203153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 00:26:13.663 [2024-12-09 05:20:50.203319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.203331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 00:26:13.663 [2024-12-09 05:20:50.203476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.203489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 00:26:13.663 [2024-12-09 05:20:50.203582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.203595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 00:26:13.663 [2024-12-09 05:20:50.203681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.203694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 
00:26:13.663 [2024-12-09 05:20:50.203776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.203788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 00:26:13.663 [2024-12-09 05:20:50.203952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.203965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 00:26:13.663 [2024-12-09 05:20:50.204074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.204088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 00:26:13.663 [2024-12-09 05:20:50.204183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.204195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 00:26:13.663 [2024-12-09 05:20:50.204373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.204385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 00:26:13.663 [2024-12-09 05:20:50.204482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.204495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 00:26:13.663 [2024-12-09 05:20:50.204577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.204591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 00:26:13.663 [2024-12-09 05:20:50.204688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.204702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 00:26:13.663 [2024-12-09 05:20:50.204779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.204791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 00:26:13.663 [2024-12-09 05:20:50.204887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.663 [2024-12-09 05:20:50.204900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.663 qpair failed and we were unable to recover it. 
00:26:13.664 [2024-12-09 05:20:50.205029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.664 [2024-12-09 05:20:50.205042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.664 qpair failed and we were unable to recover it. 00:26:13.664 [2024-12-09 05:20:50.205202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.664 [2024-12-09 05:20:50.205215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.664 qpair failed and we were unable to recover it. 00:26:13.664 [2024-12-09 05:20:50.205297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.664 [2024-12-09 05:20:50.205310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.664 qpair failed and we were unable to recover it. 00:26:13.664 [2024-12-09 05:20:50.205395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.664 [2024-12-09 05:20:50.205406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.664 qpair failed and we were unable to recover it. 00:26:13.664 [2024-12-09 05:20:50.205514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.664 [2024-12-09 05:20:50.205527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.664 qpair failed and we were unable to recover it. 00:26:13.664 [2024-12-09 05:20:50.205677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.664 [2024-12-09 05:20:50.205690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.664 qpair failed and we were unable to recover it. 00:26:13.664 [2024-12-09 05:20:50.205909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.664 [2024-12-09 05:20:50.205922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.664 qpair failed and we were unable to recover it. 00:26:13.664 [2024-12-09 05:20:50.205991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.664 [2024-12-09 05:20:50.206008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.664 qpair failed and we were unable to recover it. 00:26:13.664 [2024-12-09 05:20:50.206086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.664 [2024-12-09 05:20:50.206102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.664 qpair failed and we were unable to recover it. 00:26:13.664 [2024-12-09 05:20:50.206178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.664 [2024-12-09 05:20:50.206191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.664 qpair failed and we were unable to recover it. 
00:26:13.664 [2024-12-09 05:20:50.206401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.664 [2024-12-09 05:20:50.206413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.664 qpair failed and we were unable to recover it. 00:26:13.664 [2024-12-09 05:20:50.206518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.664 [2024-12-09 05:20:50.206530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.664 qpair failed and we were unable to recover it. 00:26:13.664 [2024-12-09 05:20:50.206618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.664 [2024-12-09 05:20:50.206631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.664 qpair failed and we were unable to recover it. 00:26:13.664 [2024-12-09 05:20:50.206717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.664 [2024-12-09 05:20:50.206731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.664 qpair failed and we were unable to recover it. 00:26:13.664 [2024-12-09 05:20:50.206812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.664 [2024-12-09 05:20:50.206824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.664 qpair failed and we were unable to recover it. 00:26:13.664 [2024-12-09 05:20:50.206911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.664 [2024-12-09 05:20:50.206923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.664 qpair failed and we were unable to recover it. 00:26:13.664 [2024-12-09 05:20:50.207006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.664 [2024-12-09 05:20:50.207019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.664 qpair failed and we were unable to recover it. 00:26:13.664 [2024-12-09 05:20:50.207103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.664 [2024-12-09 05:20:50.207116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.664 qpair failed and we were unable to recover it. 00:26:13.664 [2024-12-09 05:20:50.207194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.664 [2024-12-09 05:20:50.207207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.664 qpair failed and we were unable to recover it. 00:26:13.664 [2024-12-09 05:20:50.207350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.664 [2024-12-09 05:20:50.207363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.664 qpair failed and we were unable to recover it. 
00:26:13.664 [2024-12-09 05:20:50.207442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.664 [2024-12-09 05:20:50.207454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.664 qpair failed and we were unable to recover it. 00:26:13.664 [2024-12-09 05:20:50.207664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.664 [2024-12-09 05:20:50.207676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.664 qpair failed and we were unable to recover it. 00:26:13.664 [2024-12-09 05:20:50.207830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.664 [2024-12-09 05:20:50.207844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.664 qpair failed and we were unable to recover it. 00:26:13.664 [2024-12-09 05:20:50.207937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.664 [2024-12-09 05:20:50.207950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.664 qpair failed and we were unable to recover it. 00:26:13.664 [2024-12-09 05:20:50.208054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.208067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 00:26:13.665 [2024-12-09 05:20:50.208149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.208162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 00:26:13.665 [2024-12-09 05:20:50.208310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.208322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 00:26:13.665 [2024-12-09 05:20:50.208415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.208427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 00:26:13.665 [2024-12-09 05:20:50.208590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.208602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 00:26:13.665 [2024-12-09 05:20:50.208748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.208761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 
00:26:13.665 [2024-12-09 05:20:50.208902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.208916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 00:26:13.665 [2024-12-09 05:20:50.209012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.209025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 00:26:13.665 [2024-12-09 05:20:50.209172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.209185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 00:26:13.665 [2024-12-09 05:20:50.209276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.209288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 00:26:13.665 [2024-12-09 05:20:50.209372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.209385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 00:26:13.665 [2024-12-09 05:20:50.209543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.209557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 00:26:13.665 [2024-12-09 05:20:50.209725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.209737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 00:26:13.665 [2024-12-09 05:20:50.209850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.209862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 00:26:13.665 [2024-12-09 05:20:50.209958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.209971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 00:26:13.665 [2024-12-09 05:20:50.210050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.210062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 
00:26:13.665 [2024-12-09 05:20:50.210143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.210156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 00:26:13.665 [2024-12-09 05:20:50.210304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.210316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 00:26:13.665 [2024-12-09 05:20:50.210402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.210415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 00:26:13.665 [2024-12-09 05:20:50.210570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.210582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 00:26:13.665 [2024-12-09 05:20:50.210661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.210674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 00:26:13.665 [2024-12-09 05:20:50.210900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.210912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 00:26:13.665 [2024-12-09 05:20:50.211067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.211081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 00:26:13.665 [2024-12-09 05:20:50.211161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.211173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 00:26:13.665 [2024-12-09 05:20:50.211276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.211291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 00:26:13.665 [2024-12-09 05:20:50.211448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.211461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 
00:26:13.665 [2024-12-09 05:20:50.211554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.211566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 00:26:13.665 [2024-12-09 05:20:50.211664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.211677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 00:26:13.665 [2024-12-09 05:20:50.211767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.665 [2024-12-09 05:20:50.211780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.665 qpair failed and we were unable to recover it. 00:26:13.665 [2024-12-09 05:20:50.211935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.211947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 00:26:13.666 [2024-12-09 05:20:50.212049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.212062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 00:26:13.666 [2024-12-09 05:20:50.212149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.212163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 00:26:13.666 [2024-12-09 05:20:50.212309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.212323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 00:26:13.666 [2024-12-09 05:20:50.212416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.212428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 00:26:13.666 [2024-12-09 05:20:50.212574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.212587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 00:26:13.666 [2024-12-09 05:20:50.212661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.212673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 
00:26:13.666 [2024-12-09 05:20:50.212760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.212774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 00:26:13.666 [2024-12-09 05:20:50.212853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.212865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 00:26:13.666 [2024-12-09 05:20:50.212950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.212963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 00:26:13.666 [2024-12-09 05:20:50.213050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.213063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 00:26:13.666 [2024-12-09 05:20:50.213155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.213167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 00:26:13.666 [2024-12-09 05:20:50.213327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.213341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 00:26:13.666 [2024-12-09 05:20:50.213431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.213444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 00:26:13.666 [2024-12-09 05:20:50.213527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.213540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 00:26:13.666 [2024-12-09 05:20:50.213686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.213699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 00:26:13.666 [2024-12-09 05:20:50.213849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.213861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 
00:26:13.666 [2024-12-09 05:20:50.214034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.214047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 00:26:13.666 [2024-12-09 05:20:50.214123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.214136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 00:26:13.666 [2024-12-09 05:20:50.214238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.214250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 00:26:13.666 [2024-12-09 05:20:50.214396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.214410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 00:26:13.666 [2024-12-09 05:20:50.214586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.214598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 00:26:13.666 [2024-12-09 05:20:50.214698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.214711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 00:26:13.666 [2024-12-09 05:20:50.214868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.214881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 00:26:13.666 [2024-12-09 05:20:50.214976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.214989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 00:26:13.666 [2024-12-09 05:20:50.215072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.215084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 00:26:13.666 [2024-12-09 05:20:50.215224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.215236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 
00:26:13.666 [2024-12-09 05:20:50.215331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.215344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 00:26:13.666 [2024-12-09 05:20:50.215442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.215455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 00:26:13.666 [2024-12-09 05:20:50.215595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.666 [2024-12-09 05:20:50.215607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.666 qpair failed and we were unable to recover it. 00:26:13.666 [2024-12-09 05:20:50.215771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.215783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 00:26:13.667 [2024-12-09 05:20:50.215866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.215878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 00:26:13.667 [2024-12-09 05:20:50.215965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.215978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 00:26:13.667 [2024-12-09 05:20:50.216057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.216070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 00:26:13.667 [2024-12-09 05:20:50.216252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.216265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 00:26:13.667 [2024-12-09 05:20:50.216354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.216369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 00:26:13.667 [2024-12-09 05:20:50.216473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.216489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 
00:26:13.667 [2024-12-09 05:20:50.216575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.216587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 00:26:13.667 [2024-12-09 05:20:50.216685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.216698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 00:26:13.667 [2024-12-09 05:20:50.216772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.216784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 00:26:13.667 [2024-12-09 05:20:50.216862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.216875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 00:26:13.667 [2024-12-09 05:20:50.216958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.216971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 00:26:13.667 [2024-12-09 05:20:50.217052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.217065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 00:26:13.667 [2024-12-09 05:20:50.217227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.217239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 00:26:13.667 [2024-12-09 05:20:50.217382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.217394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 00:26:13.667 [2024-12-09 05:20:50.217542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.217555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 00:26:13.667 [2024-12-09 05:20:50.217633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.217646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 
00:26:13.667 [2024-12-09 05:20:50.217736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.217748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 00:26:13.667 [2024-12-09 05:20:50.217898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.217910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 00:26:13.667 [2024-12-09 05:20:50.218068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.218081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 00:26:13.667 [2024-12-09 05:20:50.218168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.218180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 00:26:13.667 [2024-12-09 05:20:50.218345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.218357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 00:26:13.667 [2024-12-09 05:20:50.218446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.218459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 00:26:13.667 [2024-12-09 05:20:50.218605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.218618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 00:26:13.667 [2024-12-09 05:20:50.218705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.218718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 00:26:13.667 [2024-12-09 05:20:50.218813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.218826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 00:26:13.667 [2024-12-09 05:20:50.218934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.218947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 
00:26:13.667 [2024-12-09 05:20:50.219062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.219075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.667 qpair failed and we were unable to recover it. 00:26:13.667 [2024-12-09 05:20:50.219162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.667 [2024-12-09 05:20:50.219174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.668 qpair failed and we were unable to recover it. 00:26:13.668 [2024-12-09 05:20:50.219260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-12-09 05:20:50.219274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.668 qpair failed and we were unable to recover it. 00:26:13.668 [2024-12-09 05:20:50.219420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-12-09 05:20:50.219433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.668 qpair failed and we were unable to recover it. 00:26:13.668 [2024-12-09 05:20:50.219611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-12-09 05:20:50.219624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.668 qpair failed and we were unable to recover it. 00:26:13.668 [2024-12-09 05:20:50.219793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-12-09 05:20:50.219808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.668 qpair failed and we were unable to recover it. 00:26:13.668 [2024-12-09 05:20:50.219893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-12-09 05:20:50.219905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.668 qpair failed and we were unable to recover it. 00:26:13.668 [2024-12-09 05:20:50.219986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-12-09 05:20:50.220004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.668 qpair failed and we were unable to recover it. 00:26:13.668 [2024-12-09 05:20:50.220091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-12-09 05:20:50.220104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.668 qpair failed and we were unable to recover it. 00:26:13.668 [2024-12-09 05:20:50.220271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-12-09 05:20:50.220284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.668 qpair failed and we were unable to recover it. 
00:26:13.668 [2024-12-09 05:20:50.220357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-12-09 05:20:50.220369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.668 qpair failed and we were unable to recover it. 00:26:13.668 [2024-12-09 05:20:50.220444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-12-09 05:20:50.220456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.668 qpair failed and we were unable to recover it. 00:26:13.668 [2024-12-09 05:20:50.220595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-12-09 05:20:50.220608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.668 qpair failed and we were unable to recover it. 00:26:13.668 [2024-12-09 05:20:50.220684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-12-09 05:20:50.220697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.668 qpair failed and we were unable to recover it. 00:26:13.668 [2024-12-09 05:20:50.220805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-12-09 05:20:50.220820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.668 qpair failed and we were unable to recover it. 00:26:13.668 [2024-12-09 05:20:50.220902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-12-09 05:20:50.220914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.668 qpair failed and we were unable to recover it. 00:26:13.668 [2024-12-09 05:20:50.221081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-12-09 05:20:50.221095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.668 qpair failed and we were unable to recover it. 00:26:13.668 [2024-12-09 05:20:50.221172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-12-09 05:20:50.221185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.668 qpair failed and we were unable to recover it. 00:26:13.668 [2024-12-09 05:20:50.221346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-12-09 05:20:50.221361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.668 qpair failed and we were unable to recover it. 00:26:13.668 [2024-12-09 05:20:50.221442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-12-09 05:20:50.221454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.668 qpair failed and we were unable to recover it. 
00:26:13.668 [2024-12-09 05:20:50.221532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-12-09 05:20:50.221544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.668 qpair failed and we were unable to recover it. 00:26:13.668 [2024-12-09 05:20:50.221645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-12-09 05:20:50.221658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.668 qpair failed and we were unable to recover it. 00:26:13.668 [2024-12-09 05:20:50.221803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-12-09 05:20:50.221815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:13.668 qpair failed and we were unable to recover it. 00:26:14.033 [2024-12-09 05:20:50.221887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.033 [2024-12-09 05:20:50.221900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.033 qpair failed and we were unable to recover it. 00:26:14.033 [2024-12-09 05:20:50.221982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.033 [2024-12-09 05:20:50.221995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.033 qpair failed and we were unable to recover it. 00:26:14.033 [2024-12-09 05:20:50.222082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.033 [2024-12-09 05:20:50.222094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.033 qpair failed and we were unable to recover it. 00:26:14.033 [2024-12-09 05:20:50.222200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.033 [2024-12-09 05:20:50.222213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.033 qpair failed and we were unable to recover it. 00:26:14.033 [2024-12-09 05:20:50.222330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.033 [2024-12-09 05:20:50.222342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.033 qpair failed and we were unable to recover it. 00:26:14.033 [2024-12-09 05:20:50.222420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.033 [2024-12-09 05:20:50.222432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.033 qpair failed and we were unable to recover it. 00:26:14.033 [2024-12-09 05:20:50.222528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.033 [2024-12-09 05:20:50.222541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.033 qpair failed and we were unable to recover it. 
00:26:14.033 [2024-12-09 05:20:50.222712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.033 [2024-12-09 05:20:50.222725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.033 qpair failed and we were unable to recover it. 00:26:14.033 [2024-12-09 05:20:50.222868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.033 [2024-12-09 05:20:50.222881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.033 qpair failed and we were unable to recover it. 00:26:14.033 [2024-12-09 05:20:50.222964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.033 [2024-12-09 05:20:50.222978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.033 qpair failed and we were unable to recover it. 00:26:14.033 [2024-12-09 05:20:50.223126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.033 [2024-12-09 05:20:50.223140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.033 qpair failed and we were unable to recover it. 00:26:14.033 [2024-12-09 05:20:50.223283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.033 [2024-12-09 05:20:50.223296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.033 qpair failed and we were unable to recover it. 00:26:14.033 [2024-12-09 05:20:50.223532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.033 [2024-12-09 05:20:50.223545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.033 qpair failed and we were unable to recover it. 00:26:14.033 [2024-12-09 05:20:50.223625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.033 [2024-12-09 05:20:50.223638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.033 qpair failed and we were unable to recover it. 00:26:14.033 [2024-12-09 05:20:50.223733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.033 [2024-12-09 05:20:50.223746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.033 qpair failed and we were unable to recover it. 00:26:14.033 [2024-12-09 05:20:50.223833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.033 [2024-12-09 05:20:50.223846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.033 qpair failed and we were unable to recover it. 00:26:14.033 [2024-12-09 05:20:50.223922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.033 [2024-12-09 05:20:50.223934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.033 qpair failed and we were unable to recover it. 
00:26:14.033 [2024-12-09 05:20:50.224036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.033 [2024-12-09 05:20:50.224050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.033 qpair failed and we were unable to recover it. 00:26:14.033 [2024-12-09 05:20:50.224131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.033 [2024-12-09 05:20:50.224143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.033 qpair failed and we were unable to recover it. 00:26:14.033 [2024-12-09 05:20:50.224240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.033 [2024-12-09 05:20:50.224254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.033 qpair failed and we were unable to recover it. 00:26:14.033 [2024-12-09 05:20:50.224361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.033 [2024-12-09 05:20:50.224374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.033 qpair failed and we were unable to recover it. 00:26:14.033 [2024-12-09 05:20:50.224531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.033 [2024-12-09 05:20:50.224543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.033 qpair failed and we were unable to recover it. 00:26:14.033 [2024-12-09 05:20:50.224633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.033 [2024-12-09 05:20:50.224646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.033 [2024-12-09 05:20:50.224640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:14.033 qpair failed and we were unable to recover it. 00:26:14.033 [2024-12-09 05:20:50.224761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.033 [2024-12-09 05:20:50.224783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.033 qpair failed and we were unable to recover it. 00:26:14.033 [2024-12-09 05:20:50.224873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.224890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.225041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.225058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 
00:26:14.034 [2024-12-09 05:20:50.225166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.225182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.225277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.225293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.225388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.225404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.225568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.225584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.225680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.225697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.225789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.225806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.225887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.225904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.225996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.226019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.226193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.226210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.226319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.226335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 
00:26:14.034 [2024-12-09 05:20:50.226430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.226445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.226605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.226621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.226719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.226737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.226978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.226995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.227112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.227135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.227237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.227252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.227414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.227430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.227534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.227551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.227738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.227754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.227854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.227871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 
00:26:14.034 [2024-12-09 05:20:50.227984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.228005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.228092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.228109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.228229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.228249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.228333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.228349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.228488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.228505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.228598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.228615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.228797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.228814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.228934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.228951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.229035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.229052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.229213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.229230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 
00:26:14.034 [2024-12-09 05:20:50.229392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.229408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.229495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.229512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.229628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.229643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.229750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.229765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.229876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.229892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.229991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.034 [2024-12-09 05:20:50.230013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.034 qpair failed and we were unable to recover it. 00:26:14.034 [2024-12-09 05:20:50.230126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.230143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.230295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.230312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.230470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.230487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.230592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.230609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 
00:26:14.035 [2024-12-09 05:20:50.230694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.230711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.230803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.230820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.230909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.230926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.231014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.231031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.231162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.231179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.231329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.231348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.231440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.231456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.231547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.231564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.231647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.231664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.231787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.231817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 
00:26:14.035 [2024-12-09 05:20:50.231993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.232022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.232113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.232130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.232215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.232232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.232326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.232342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.232503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.232520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.232610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.232627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.232724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.232741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.232842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.232859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.232952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.232969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.233071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.233089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 
00:26:14.035 [2024-12-09 05:20:50.233202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.233219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.233324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.233341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.233435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.233455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.233545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.233561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.233660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.233676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.233774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.233791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.233884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.233901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.234073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.234093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.234195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.234211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.234312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.234329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 
00:26:14.035 [2024-12-09 05:20:50.234435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.234452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.234608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.234625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.234707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.234723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.234817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.035 [2024-12-09 05:20:50.234832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.035 qpair failed and we were unable to recover it. 00:26:14.035 [2024-12-09 05:20:50.234996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.235018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.235109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.235126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.235206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.235223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.235340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.235357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.235447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.235464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.235573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.235591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 
00:26:14.036 [2024-12-09 05:20:50.235673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.235691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.235851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.235875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.235986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.236012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.236109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.236127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.236330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.236348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.236438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.236455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.236538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.236555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.236644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.236660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.236744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.236761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.236863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.236891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 
00:26:14.036 [2024-12-09 05:20:50.237059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.237078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.237169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.237185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.237268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.237284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.237455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.237473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.237623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.237640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.237818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.237834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.237929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.237946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.238050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.238067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.238225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.238238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.238329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.238342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 
00:26:14.036 [2024-12-09 05:20:50.238428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.238441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.238533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.238546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.238649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.238669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.238764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.238777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.238866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.238880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.238971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.238984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.239097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.239113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.239193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.239206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.239351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.239365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.239508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.239521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 
00:26:14.036 [2024-12-09 05:20:50.239618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.239631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.036 [2024-12-09 05:20:50.239732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.036 [2024-12-09 05:20:50.239745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.036 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.239903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.239917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.239993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.240015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.240108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.240121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.240219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.240234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.240389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.240402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.240547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.240560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.240642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.240654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.240804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.240817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 
00:26:14.037 [2024-12-09 05:20:50.240911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.240923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.241005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.241019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.241161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.241174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.241264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.241281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.241468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.241481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.241581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.241594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.241679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.241692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.241851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.241864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.242026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.242039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.242148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.242169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 
00:26:14.037 [2024-12-09 05:20:50.242320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.242337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.242424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.242440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.242544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.242561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.242730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.242748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.242851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.242867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.243054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.243074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.243171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.243188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.243308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.243325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.243419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.243436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.243534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.243551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 
00:26:14.037 [2024-12-09 05:20:50.243659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.243676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.243765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.243782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.243886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.243903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.244136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.244156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.244311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.037 [2024-12-09 05:20:50.244329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.037 qpair failed and we were unable to recover it. 00:26:14.037 [2024-12-09 05:20:50.244423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.244440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.244523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.244540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.244643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.244660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.244815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.244833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.244982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.245005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 
00:26:14.038 [2024-12-09 05:20:50.245104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.245122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.245234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.245252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.245423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.245441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.245614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.245631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.245788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.245806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.245904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.245920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.246033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.246054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.246160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.246177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.246275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.246293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.246378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.246394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 
00:26:14.038 [2024-12-09 05:20:50.246564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.246581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.246667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.246686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.246907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.246924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.247025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.247043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.247208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.247226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.247381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.247398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.247555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.247573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.247668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.247685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.247781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.247798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.247964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.247981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 
00:26:14.038 [2024-12-09 05:20:50.248157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.248174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.248265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.248282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.248435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.248458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.248553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.248570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.248736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.248754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.248925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.248943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.249052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.249071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.249226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.249245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.249405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.249422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.249573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.249592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 
00:26:14.038 [2024-12-09 05:20:50.249747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.249765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.249853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.249869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.250024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.250041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.038 qpair failed and we were unable to recover it. 00:26:14.038 [2024-12-09 05:20:50.250152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.038 [2024-12-09 05:20:50.250172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.250272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.250290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.250402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.250420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.250521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.250539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.250697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.250714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.250816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.250832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.250984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.251006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 
00:26:14.039 [2024-12-09 05:20:50.251116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.251133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.251221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.251237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.251392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.251409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.251483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.251500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.251680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.251696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.251764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.251780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.251878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.251895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.252015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.252032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.252186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.252204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.252311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.252329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 
00:26:14.039 [2024-12-09 05:20:50.252446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.252464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.252626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.252644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.252730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.252747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.252820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.252837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.252922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.252940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.253057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.253075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.253229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.253246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.253464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.253482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.253586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.253604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.253691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.253709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 
00:26:14.039 [2024-12-09 05:20:50.253803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.253826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.253934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.253951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.254100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.254118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.254286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.254304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.254463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.254480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.254634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.254651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.254751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.254768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.254864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.254881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.254982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.255004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.255176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.255194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 
00:26:14.039 [2024-12-09 05:20:50.255334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.255350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.255460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.039 [2024-12-09 05:20:50.255478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.039 qpair failed and we were unable to recover it. 00:26:14.039 [2024-12-09 05:20:50.255697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.255715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.255804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.255821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.255930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.255951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.256057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.256076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.256150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.256168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.256273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.256286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.256379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.256391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.256480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.256495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 
00:26:14.040 [2024-12-09 05:20:50.256606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.256620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.256716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.256729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.256821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.256834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.256996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.257016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.257180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.257193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.257278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.257292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.257391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.257404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.257554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.257571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.257716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.257730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.257816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.257839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 
00:26:14.040 [2024-12-09 05:20:50.257915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.257928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.258006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.258020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.258173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.258186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.258264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.258278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.258356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.258369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.258459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.258480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.258591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.258608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.258708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.258722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.258816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.258829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.258910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.258923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 
00:26:14.040 [2024-12-09 05:20:50.259134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.259147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.259260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.259274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.259419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.259431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.259581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.259594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.259683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.259699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.259800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.259814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.259912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.259926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.260073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.260088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.260270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.260283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 00:26:14.040 [2024-12-09 05:20:50.260362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.040 [2024-12-09 05:20:50.260375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.040 qpair failed and we were unable to recover it. 
00:26:14.041 [2024-12-09 05:20:50.260462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.260475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.260556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.260570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.260651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.260664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.260742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.260756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.260866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.260887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.260975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.260993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.261083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.261100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.261202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.261219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.261388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.261405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.261560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.261577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 
00:26:14.041 [2024-12-09 05:20:50.261671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.261688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.261779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.261796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.261885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.261901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.262052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.262069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.262182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.262199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.262358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.262376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.262475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.262491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.262707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.262728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.262879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.262896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.262989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.263011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 
00:26:14.041 [2024-12-09 05:20:50.263109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.263127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.263225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.263243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.263338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.263356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.263448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.263465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.263550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.263566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.263657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.263673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.263831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.263844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.263942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.263955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.264052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.264066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.264170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.264183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 
00:26:14.041 [2024-12-09 05:20:50.264340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.264352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.264440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.264454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.264550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.264563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.264715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.264728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.264890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.264903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.264995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.265023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.265108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.265121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.041 [2024-12-09 05:20:50.265311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.041 [2024-12-09 05:20:50.265325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.041 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.265534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.265548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.265628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.265641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 
00:26:14.042 [2024-12-09 05:20:50.265741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.265755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.265846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.265860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.265957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.265971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.266115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.266129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.266303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.266318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.266407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.266419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.266502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.266515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.266691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.266704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.266855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.266868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.267014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.267028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 
00:26:14.042 [2024-12-09 05:20:50.267188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.267201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.267365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.267381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.267525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.267539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.267622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.267635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.267802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.267815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.267908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.267921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.268083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.268097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.268181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.268197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.268365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.268378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.268482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.268496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 
00:26:14.042 [2024-12-09 05:20:50.268730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.268743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.268827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.268840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.268917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.268930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.269095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.269126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.269229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.269242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.269396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.269410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.269549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.269564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.269641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.269654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.269736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.269749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.269936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.269950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 
00:26:14.042 [2024-12-09 05:20:50.270137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.042 [2024-12-09 05:20:50.270150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.042 qpair failed and we were unable to recover it. 00:26:14.042 [2024-12-09 05:20:50.270263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.270283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.270413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.270427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.270506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.270520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.270672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.270686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.270788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.270801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.270902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.270915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.271019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.271033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.271195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.271208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.271295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.271307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 
00:26:14.043 [2024-12-09 05:20:50.271463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.271475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.271565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.271578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.271745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.271759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.271848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.271861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.272042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.272063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.272237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.272255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.272405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.272422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.272525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.272541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.272627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.272644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.272748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.272764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 
00:26:14.043 [2024-12-09 05:20:50.272868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.272886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.272980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.272996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.273155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.273173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.273276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.273293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.273400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.273417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.273587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.273605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.273696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.273721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.273828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.273846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.273934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.273951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.274023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.274053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 
00:26:14.043 [2024-12-09 05:20:50.274166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.274183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.274284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.274302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.274405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.274421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.274526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.274543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.274632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.274649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.274736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.274752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.274916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.274934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.275152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.275170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.275264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.275283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.043 qpair failed and we were unable to recover it. 00:26:14.043 [2024-12-09 05:20:50.275526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.043 [2024-12-09 05:20:50.275544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 
00:26:14.044 [2024-12-09 05:20:50.275660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.275677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.275780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.275801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.275892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.275910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.276083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.276101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.276288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.276306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.276395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.276412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.276511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.276528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.276615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.276632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.276807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.276824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.277006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.277024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 
00:26:14.044 [2024-12-09 05:20:50.277126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.277144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.277240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.277258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.277410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.277427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.277508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.277526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.277611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.277628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.277818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.277836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.278074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.278093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.278197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.278215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.278304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.278321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.278405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.278422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 
00:26:14.044 [2024-12-09 05:20:50.278510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.278527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.278635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.278652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.278741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.278759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.278919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.278937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.279038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.279056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.279275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.279293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it.
00:26:14.044 [2024-12-09 05:20:50.279450] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:14.044 [2024-12-09 05:20:50.279451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.279469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it.
00:26:14.044 [2024-12-09 05:20:50.279485] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:14.044 [2024-12-09 05:20:50.279498] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:14.044 [2024-12-09 05:20:50.279513] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:14.044 [2024-12-09 05:20:50.279521] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:14.044 [2024-12-09 05:20:50.279565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.279580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it.
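The app_setup_trace notices above already spell out how to grab the trace data for this run. As a convenience, a minimal sketch using only the command and path the notices themselves name (the only assumptions are that the spdk_trace tool built with SPDK is on PATH on the test node and that /tmp is a writable scratch location):
# Snapshot the nvmf tracepoints of the running app with instance id 0,
# exactly as suggested by the NOTICE above.
spdk_trace -s nvmf -i 0
# Plain 'spdk_trace' also works when this is the only SPDK application running.
spdk_trace
# Or keep the shared-memory trace file for offline analysis/debug.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0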
00:26:14.044 [2024-12-09 05:20:50.279731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.279747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.279860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.279876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.280043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.280061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.280160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.280177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.280273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.280290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.280374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.280391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.280471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.280488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.044 [2024-12-09 05:20:50.280585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.044 [2024-12-09 05:20:50.280602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.044 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.280765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.280783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.280877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.280893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 
00:26:14.045 [2024-12-09 05:20:50.280985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.281012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.281106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.281123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.281211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.281228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.281448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.281465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.281618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.281635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it.
00:26:14.045 [2024-12-09 05:20:50.281702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:26:14.045 [2024-12-09 05:20:50.281731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:26:14.045 [2024-12-09 05:20:50.281784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.281801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it.
00:26:14.045 [2024-12-09 05:20:50.281845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:26:14.045 [2024-12-09 05:20:50.281846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:26:14.045 [2024-12-09 05:20:50.281905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.281920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.282095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.282112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.282202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.282218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it.
00:26:14.045 [2024-12-09 05:20:50.282381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.282398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.282559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.282576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.282665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.282683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.282781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.282799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.282970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.282989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.283142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.283163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.283380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.283398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.283565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.283583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.283677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.283695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.283794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.283811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 
00:26:14.045 [2024-12-09 05:20:50.283908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.283926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.284032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.284049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.284139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.284155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.284260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.284278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.284546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.284563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.284653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.284670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.284764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.284782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.284884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.284901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.285058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.285077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.285188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.285206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 
00:26:14.045 [2024-12-09 05:20:50.285360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.285377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.285482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.285499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.285581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.285598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.285752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.285769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.285990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.286013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.045 [2024-12-09 05:20:50.286190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.045 [2024-12-09 05:20:50.286208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.045 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.286302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.286320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.286425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.286442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.286558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.286576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.286662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.286680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 
00:26:14.046 [2024-12-09 05:20:50.286838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.286855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.286941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.286959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.287057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.287079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.287164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.287182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.287337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.287354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.287511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.287529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.287691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.287709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.287817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.287834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.287956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.287974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.288076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.288095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 
00:26:14.046 [2024-12-09 05:20:50.288184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.288200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.288285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.288303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.288405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.288422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.288514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.288531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.288681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.288699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.288851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.288869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.289046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.289064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.289227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.289245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.289466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.289484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.289581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.289599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 
00:26:14.046 [2024-12-09 05:20:50.289716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.289733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.289824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.289841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.290006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.290025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.290181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.290199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.290290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.290307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.290421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.290439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.290539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.290556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.290651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.290667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.290758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.290775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 00:26:14.046 [2024-12-09 05:20:50.290903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.046 [2024-12-09 05:20:50.290924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.046 qpair failed and we were unable to recover it. 
00:26:14.046 [2024-12-09 05:20:50.291019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.046 [2024-12-09 05:20:50.291036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420
00:26:14.046 qpair failed and we were unable to recover it.
00:26:14.047 [2024-12-09 05:20:50.292045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.047 [2024-12-09 05:20:50.292075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420
00:26:14.047 qpair failed and we were unable to recover it.
00:26:14.047 [2024-12-09 05:20:50.292163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.047 [2024-12-09 05:20:50.292182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420
00:26:14.047 qpair failed and we were unable to recover it.
00:26:14.050 [2024-12-09 05:20:50.307492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.050 [2024-12-09 05:20:50.307519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420
00:26:14.050 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=... with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats continuously from 2024-12-09 05:20:50.291 through 05:20:50.323 (log time 00:26:14.046-00:26:14.052), cycling over tqpair handles 0x7f96b4000b90, 0x7f96b0000b90, 0x7f96bc000b90, and 0x1a7fbe0 ...]
00:26:14.052 [2024-12-09 05:20:50.324078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.052 [2024-12-09 05:20:50.324096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.052 qpair failed and we were unable to recover it. 00:26:14.052 [2024-12-09 05:20:50.324284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.052 [2024-12-09 05:20:50.324300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.052 qpair failed and we were unable to recover it. 00:26:14.052 [2024-12-09 05:20:50.324413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.052 [2024-12-09 05:20:50.324430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.052 qpair failed and we were unable to recover it. 00:26:14.052 [2024-12-09 05:20:50.324591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.052 [2024-12-09 05:20:50.324608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.052 qpair failed and we were unable to recover it. 00:26:14.052 [2024-12-09 05:20:50.324850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.052 [2024-12-09 05:20:50.324871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.052 qpair failed and we were unable to recover it. 00:26:14.052 [2024-12-09 05:20:50.325038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.052 [2024-12-09 05:20:50.325056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.052 qpair failed and we were unable to recover it. 00:26:14.052 [2024-12-09 05:20:50.325251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.052 [2024-12-09 05:20:50.325268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.052 qpair failed and we were unable to recover it. 00:26:14.052 [2024-12-09 05:20:50.325485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.052 [2024-12-09 05:20:50.325501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.052 qpair failed and we were unable to recover it. 00:26:14.052 [2024-12-09 05:20:50.325620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.052 [2024-12-09 05:20:50.325636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.052 qpair failed and we were unable to recover it. 00:26:14.052 [2024-12-09 05:20:50.325783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.052 [2024-12-09 05:20:50.325799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.052 qpair failed and we were unable to recover it. 
00:26:14.052 [2024-12-09 05:20:50.325911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.052 [2024-12-09 05:20:50.325928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.052 qpair failed and we were unable to recover it. 00:26:14.052 [2024-12-09 05:20:50.326081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.052 [2024-12-09 05:20:50.326098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.052 qpair failed and we were unable to recover it. 00:26:14.052 [2024-12-09 05:20:50.326221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.052 [2024-12-09 05:20:50.326238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.052 qpair failed and we were unable to recover it. 00:26:14.052 [2024-12-09 05:20:50.326343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.052 [2024-12-09 05:20:50.326359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.052 qpair failed and we were unable to recover it. 00:26:14.052 [2024-12-09 05:20:50.326474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.052 [2024-12-09 05:20:50.326491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.052 qpair failed and we were unable to recover it. 00:26:14.052 [2024-12-09 05:20:50.326673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.052 [2024-12-09 05:20:50.326690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.052 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.326935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.326951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.327066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.327083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.327306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.327323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.327491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.327508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 
00:26:14.053 [2024-12-09 05:20:50.327600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.327617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.327713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.327729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.327827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.327844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.327938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.327955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.328173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.328190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.328283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.328299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.328470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.328487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.328663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.328681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.328900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.328917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.329101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.329119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 
00:26:14.053 [2024-12-09 05:20:50.329305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.329322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.329425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.329442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.329670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.329687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.329858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.329875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.329968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.329985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.330213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.330231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.330420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.330437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.330681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.330698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.330937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.330954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.331067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.331083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 
00:26:14.053 [2024-12-09 05:20:50.331236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.331253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.331473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.331491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.331730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.331747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.331911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.331928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.332159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.332177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.332290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.332314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.332478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.332491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.332634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.332647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.332803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.332817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 00:26:14.053 [2024-12-09 05:20:50.333058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.053 [2024-12-09 05:20:50.333073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.053 qpair failed and we were unable to recover it. 
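Editor's note on the failures above: errno = 111 is ECONNREFUSED on Linux, i.e. the TCP connection attempt to 10.0.0.2:4420 (the conventional NVMe-oF port used by this test) was actively refused because nothing was accepting connections there at that moment. The standalone C sketch below is illustrative only (it is not SPDK's posix_sock_create); it simply reproduces the errno the log reports when no target is listening on that address/port.

```c
/* Illustrative sketch only -- not SPDK code. Attempts a plain blocking
 * connect() to the address/port from the log; with no NVMe/TCP target
 * listening it fails with errno = 111 (ECONNREFUSED), matching the
 * "connect() failed, errno = 111" entries above. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in target;
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}

	memset(&target, 0, sizeof(target));
	target.sin_family = AF_INET;
	target.sin_port = htons(4420);                    /* port from the log */
	inet_pton(AF_INET, "10.0.0.2", &target.sin_addr); /* addr from the log */

	if (connect(fd, (struct sockaddr *)&target, sizeof(target)) != 0) {
		/* With nothing listening this prints: connect() failed, errno = 111 (Connection refused) */
		printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
	}

	close(fd);
	return 0;
}
```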
00:26:14.053 [2024-12-09 05:20:50.333238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.053 [2024-12-09 05:20:50.333253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420
00:26:14.053 qpair failed and we were unable to recover it.
[... the identical pattern repeats for tqpair=0x7f96b4000b90, again with only the microsecond timestamps changing, until roughly 05:20:50.347, when the failing qpair pointer switches to tqpair=0x7f96b0000b90 ...]
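A second editorial aside: the specific errno value narrows down the failure mode. The small classifier below is a hedged illustration (the helper name and messages are assumptions, not part of the test suite); errno 111 / ECONNREFUSED, as seen throughout this stretch, points at a reachable host with nothing bound to the port, whereas timeouts or unreachable errors would point at the fabric or routing instead.

```c
/* Illustrative classifier for connect() errno values, using the Linux
 * asm-generic numbering; not part of the SPDK test suite. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

static const char *classify_connect_errno(int err)
{
	switch (err) {
	case ECONNREFUSED: /* 111: host reachable, nothing listening on the port */
		return "target not listening on the port (e.g. nvmf target not started or listener not added)";
	case ETIMEDOUT:    /* 110: no answer at all */
		return "no response from the target host (link down or packets dropped)";
	case EHOSTUNREACH: /* 113 */
	case ENETUNREACH:  /* 101 */
		return "no route to the target address (addressing/routing problem)";
	default:
		return "other failure";
	}
}

int main(void)
{
	int err = 111; /* the value reported throughout this log */

	printf("errno %d (%s): %s\n", err, strerror(err), classify_connect_errno(err));
	return 0;
}
```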
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." pattern continues, now against tqpair=0x7f96b0000b90 and then once more against tqpair=0x1a7fbe0, with every attempt targeting addr=10.0.0.2, port=4420 ...]
00:26:14.057 [2024-12-09 05:20:50.357175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.357192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.357364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.357380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.357604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.357621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.357880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.357896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.358051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.358068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.358284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.358300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.358476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.358492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.358714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.358730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.358887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.358904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.359010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.359026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 
00:26:14.057 [2024-12-09 05:20:50.359192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.359209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.359377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.359393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.359611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.359628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.359792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.359808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.359923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.359940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.360106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.360124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.360367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.360383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.360599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.360615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.360784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.360802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.360964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.360981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 
00:26:14.057 [2024-12-09 05:20:50.361220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.361237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.361400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.361417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.361654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.361673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.361836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.361853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.362073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.362091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.362320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.362337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.362604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.362621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.362780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.362797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.362977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.362993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.363154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.363170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 
00:26:14.057 [2024-12-09 05:20:50.363396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.363412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.363585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.363601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.057 [2024-12-09 05:20:50.363843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.057 [2024-12-09 05:20:50.363860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.057 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.364102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.364119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.364381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.364398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.364554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.364570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.364742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.364759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.364965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.364982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.365261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.365288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.365547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.365563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 
00:26:14.058 [2024-12-09 05:20:50.365774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.365790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.366058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.366075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.366246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.366262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.366548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.366564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.366750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.366766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.366916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.366932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.367090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.367107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.367328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.367344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.367567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.367583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.367746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.367766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 
00:26:14.058 [2024-12-09 05:20:50.367991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.368011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.368231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.368247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.368320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.368336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.368470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.368486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.368701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.368717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.368878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.368895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.368982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.369002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.369245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.369262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.369447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.369463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.369628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.369643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 
00:26:14.058 [2024-12-09 05:20:50.369890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.369907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.370079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.370096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.370338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.370353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.370575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.370591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.370894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.370910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.371141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.371158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.371354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.371370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.371623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.371639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.371795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.371812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.058 [2024-12-09 05:20:50.372049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.372066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 
00:26:14.058 [2024-12-09 05:20:50.372290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.058 [2024-12-09 05:20:50.372306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.058 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.372494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.372510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.372738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.372754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.372907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.372923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.373023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.373041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.373264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.373280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.373436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.373451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.373713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.373729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.373944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.373960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.374200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.374216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 
00:26:14.059 [2024-12-09 05:20:50.374385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.374401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.374570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.374586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.374775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.374791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.374974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.374990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.375236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.375253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.375517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.375534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.375703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.375718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.375896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.375912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.376152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.376168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.376383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.376402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 
00:26:14.059 [2024-12-09 05:20:50.376666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.376682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.376907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.376923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.377040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.377057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.377276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.377292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.377533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.377548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.377708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.377724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.377969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.377985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.378095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.378114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.378213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.378229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.378393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.378409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 
00:26:14.059 [2024-12-09 05:20:50.378561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.378577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.378695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.378712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.378877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.378893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.379123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.379140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.379319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.379335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.379425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.379442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.379606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.379622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.379843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.379859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.059 [2024-12-09 05:20:50.380078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.059 [2024-12-09 05:20:50.380095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.059 qpair failed and we were unable to recover it. 00:26:14.060 [2024-12-09 05:20:50.380265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.060 [2024-12-09 05:20:50.380281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.060 qpair failed and we were unable to recover it. 
00:26:14.060 [2024-12-09 05:20:50.380522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.060 [2024-12-09 05:20:50.380538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.060 qpair failed and we were unable to recover it. 00:26:14.060 [2024-12-09 05:20:50.380784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.060 [2024-12-09 05:20:50.380800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.060 qpair failed and we were unable to recover it. 00:26:14.060 [2024-12-09 05:20:50.381041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.060 [2024-12-09 05:20:50.381058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.060 qpair failed and we were unable to recover it. 00:26:14.060 [2024-12-09 05:20:50.381245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.060 [2024-12-09 05:20:50.381261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.060 qpair failed and we were unable to recover it. 00:26:14.060 [2024-12-09 05:20:50.381412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.060 [2024-12-09 05:20:50.381429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.060 qpair failed and we were unable to recover it. 00:26:14.060 [2024-12-09 05:20:50.381647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.060 [2024-12-09 05:20:50.381663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.060 qpair failed and we were unable to recover it. 00:26:14.060 [2024-12-09 05:20:50.381938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.060 [2024-12-09 05:20:50.381955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.060 qpair failed and we were unable to recover it. 00:26:14.060 [2024-12-09 05:20:50.382056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.060 [2024-12-09 05:20:50.382073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.060 qpair failed and we were unable to recover it. 00:26:14.060 [2024-12-09 05:20:50.382262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.060 [2024-12-09 05:20:50.382278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.060 qpair failed and we were unable to recover it. 00:26:14.060 [2024-12-09 05:20:50.382486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.060 [2024-12-09 05:20:50.382503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.060 qpair failed and we were unable to recover it. 
00:26:14.060 [2024-12-09 05:20:50.382664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.060 [2024-12-09 05:20:50.382680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.060 qpair failed and we were unable to recover it. 00:26:14.060 [2024-12-09 05:20:50.382896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.060 [2024-12-09 05:20:50.382912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.060 qpair failed and we were unable to recover it. 00:26:14.060 [2024-12-09 05:20:50.383076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.060 [2024-12-09 05:20:50.383093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.060 qpair failed and we were unable to recover it. 00:26:14.060 [2024-12-09 05:20:50.383210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.060 [2024-12-09 05:20:50.383226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.060 qpair failed and we were unable to recover it. 00:26:14.060 [2024-12-09 05:20:50.383380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.060 [2024-12-09 05:20:50.383396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.060 qpair failed and we were unable to recover it. 00:26:14.060 [2024-12-09 05:20:50.383499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.060 [2024-12-09 05:20:50.383515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.060 qpair failed and we were unable to recover it. 00:26:14.060 [2024-12-09 05:20:50.383665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.060 [2024-12-09 05:20:50.383681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.060 qpair failed and we were unable to recover it. 00:26:14.060 [2024-12-09 05:20:50.383851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.060 [2024-12-09 05:20:50.383867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.060 qpair failed and we were unable to recover it. 00:26:14.060 [2024-12-09 05:20:50.384038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.060 [2024-12-09 05:20:50.384055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.060 qpair failed and we were unable to recover it. 00:26:14.060 [2024-12-09 05:20:50.384275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.060 [2024-12-09 05:20:50.384293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.060 qpair failed and we were unable to recover it. 
00:26:14.060 [2024-12-09 05:20:50.384468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.060 [2024-12-09 05:20:50.384485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420
00:26:14.060 qpair failed and we were unable to recover it.
00:26:14.060 [... the same three-line record (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats with timestamps 05:20:50.384595 through 05:20:50.397543 ...]
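For context on the repeated errno = 111 above: that value is ECONNREFUSED, which a TCP connect() returns when the peer address is reachable but nothing is listening on the port. That is exactly the state this disconnect test induces while the target is down, so the host-side qpair connects keep failing. A minimal stand-alone sketch in plain POSIX sockets (not SPDK code; 10.0.0.2:4420 mirrors the log and is assumed to have no listener) that reproduces the same errno:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    /* Address/port taken from the log above; assumed to have no listener. */
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    /* With the host up but no listener on the port, connect() typically
     * fails with ECONNREFUSED, i.e. errno = 111 on Linux. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Run against a port with an active listener instead and connect() succeeds, which is what the test expects once the target comes back up.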
00:26:14.062 [... the same connect() failed (errno = 111) / sock connection error records for tqpair=0x7f96b0000b90 (addr=10.0.0.2, port=4420) continue, timestamps 05:20:50.397694 through 05:20:50.401014, interleaved with the shell trace below ...]
00:26:14.062 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:14.062 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:26:14.062 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:14.062 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:14.062 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
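The shell trace above (the (( i == 0 )) check and return 0 in autotest_common.sh, followed by timing_exit start_nvmf_tgt) is the test harness finishing its wait for the target application, while the host side keeps retrying the qpair connect until the listener reappears. A rough sketch of that retry idea over plain sockets (not the SPDK nvme_tcp transport; the address, attempt budget, and delay are illustrative assumptions only):

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Retry a TCP connect until it succeeds or the attempt budget runs out.
 * Returns a connected fd, or -1 on failure. */
static int connect_with_retry(const char *ip, uint16_t port, int max_attempts)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(port) };
    inet_pton(AF_INET, ip, &addr.sin_addr);

    for (int i = 0; i < max_attempts; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            return -1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            return fd;              /* listener is back, connection established */
        }
        fprintf(stderr, "attempt %d: connect() failed, errno = %d (%s)\n",
                i + 1, errno, strerror(errno));
        close(fd);
        usleep(100 * 1000);         /* back off briefly before the next attempt */
    }
    return -1;
}

int main(void)
{
    int fd = connect_with_retry("10.0.0.2", 4420, 20);  /* illustrative values */
    if (fd >= 0) {
        close(fd);
    }
    return fd >= 0 ? 0 : 1;
}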
00:26:14.063 [... the same posix_sock_create (connect() failed, errno = 111) / nvme_tcp_qpair_connect_sock (sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420) record, each followed by "qpair failed and we were unable to recover it.", repeats with timestamps 05:20:50.401253 through 05:20:50.424253 ...]
00:26:14.066 [2024-12-09 05:20:50.424406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.066 [2024-12-09 05:20:50.424423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.066 qpair failed and we were unable to recover it. 00:26:14.066 [2024-12-09 05:20:50.424638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.066 [2024-12-09 05:20:50.424654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.066 qpair failed and we were unable to recover it. 00:26:14.066 [2024-12-09 05:20:50.424871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.066 [2024-12-09 05:20:50.424888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.066 qpair failed and we were unable to recover it. 00:26:14.066 [2024-12-09 05:20:50.425130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.066 [2024-12-09 05:20:50.425149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.066 qpair failed and we were unable to recover it. 00:26:14.066 [2024-12-09 05:20:50.425318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.066 [2024-12-09 05:20:50.425334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.066 qpair failed and we were unable to recover it. 00:26:14.066 [2024-12-09 05:20:50.425456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.066 [2024-12-09 05:20:50.425473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.066 qpair failed and we were unable to recover it. 00:26:14.066 [2024-12-09 05:20:50.425724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.066 [2024-12-09 05:20:50.425741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.066 qpair failed and we were unable to recover it. 00:26:14.066 [2024-12-09 05:20:50.425847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.066 [2024-12-09 05:20:50.425864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.066 qpair failed and we were unable to recover it. 00:26:14.066 [2024-12-09 05:20:50.426017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.066 [2024-12-09 05:20:50.426033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.066 qpair failed and we were unable to recover it. 00:26:14.066 [2024-12-09 05:20:50.426229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.066 [2024-12-09 05:20:50.426246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.066 qpair failed and we were unable to recover it. 
00:26:14.066 [2024-12-09 05:20:50.426408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.066 [2024-12-09 05:20:50.426424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.066 qpair failed and we were unable to recover it. 00:26:14.066 [2024-12-09 05:20:50.426522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.066 [2024-12-09 05:20:50.426539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.066 qpair failed and we were unable to recover it. 00:26:14.066 [2024-12-09 05:20:50.426694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.066 [2024-12-09 05:20:50.426711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.066 qpair failed and we were unable to recover it. 00:26:14.066 [2024-12-09 05:20:50.426871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.066 [2024-12-09 05:20:50.426887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.066 qpair failed and we were unable to recover it. 00:26:14.066 [2024-12-09 05:20:50.427042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.066 [2024-12-09 05:20:50.427060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.066 qpair failed and we were unable to recover it. 00:26:14.066 [2024-12-09 05:20:50.427296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.066 [2024-12-09 05:20:50.427311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.066 qpair failed and we were unable to recover it. 00:26:14.066 [2024-12-09 05:20:50.427534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.066 [2024-12-09 05:20:50.427550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.066 qpair failed and we were unable to recover it. 00:26:14.066 [2024-12-09 05:20:50.427817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.066 [2024-12-09 05:20:50.427834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.066 qpair failed and we were unable to recover it. 00:26:14.066 [2024-12-09 05:20:50.428008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.066 [2024-12-09 05:20:50.428025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.066 qpair failed and we were unable to recover it. 00:26:14.066 [2024-12-09 05:20:50.428209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.066 [2024-12-09 05:20:50.428226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.066 qpair failed and we were unable to recover it. 
00:26:14.066 [2024-12-09 05:20:50.428344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.066 [2024-12-09 05:20:50.428360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.066 qpair failed and we were unable to recover it. 00:26:14.066 [2024-12-09 05:20:50.428554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.066 [2024-12-09 05:20:50.428571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.066 qpair failed and we were unable to recover it. 00:26:14.066 [2024-12-09 05:20:50.428748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.066 [2024-12-09 05:20:50.428764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.066 qpair failed and we were unable to recover it. 00:26:14.066 [2024-12-09 05:20:50.428931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.066 [2024-12-09 05:20:50.428948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.066 qpair failed and we were unable to recover it. 00:26:14.066 [2024-12-09 05:20:50.429153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.066 [2024-12-09 05:20:50.429170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.066 qpair failed and we were unable to recover it. 00:26:14.066 [2024-12-09 05:20:50.429344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.066 [2024-12-09 05:20:50.429359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.066 qpair failed and we were unable to recover it. 00:26:14.066 [2024-12-09 05:20:50.429522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.066 [2024-12-09 05:20:50.429539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.066 qpair failed and we were unable to recover it. 00:26:14.066 [2024-12-09 05:20:50.429689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.429705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 00:26:14.067 [2024-12-09 05:20:50.429878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.429894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 00:26:14.067 [2024-12-09 05:20:50.430050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.430069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 
00:26:14.067 [2024-12-09 05:20:50.430320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.430337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 00:26:14.067 [2024-12-09 05:20:50.430435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.430451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 00:26:14.067 [2024-12-09 05:20:50.430554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.430571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 00:26:14.067 [2024-12-09 05:20:50.430723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.430741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 00:26:14.067 [2024-12-09 05:20:50.430899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.430915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 00:26:14.067 [2024-12-09 05:20:50.431069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.431087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 00:26:14.067 [2024-12-09 05:20:50.431191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.431207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 00:26:14.067 [2024-12-09 05:20:50.431446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.431462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 00:26:14.067 [2024-12-09 05:20:50.431696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.431712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 00:26:14.067 [2024-12-09 05:20:50.431962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.431979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 
00:26:14.067 [2024-12-09 05:20:50.432207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.432225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 00:26:14.067 [2024-12-09 05:20:50.432365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.432382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 00:26:14.067 [2024-12-09 05:20:50.432602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.432617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 00:26:14.067 [2024-12-09 05:20:50.432875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.432891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 00:26:14.067 [2024-12-09 05:20:50.432988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.433011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 00:26:14.067 [2024-12-09 05:20:50.433170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.433187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 00:26:14.067 [2024-12-09 05:20:50.433359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.433375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 00:26:14.067 [2024-12-09 05:20:50.433542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.433558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 00:26:14.067 [2024-12-09 05:20:50.433816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.433832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 00:26:14.067 [2024-12-09 05:20:50.434108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.434125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 
00:26:14.067 [2024-12-09 05:20:50.434345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.067 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:14.067 [2024-12-09 05:20:50.434363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420
00:26:14.067 qpair failed and we were unable to recover it.
00:26:14.067 [2024-12-09 05:20:50.434519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.067 [2024-12-09 05:20:50.434536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420
00:26:14.067 qpair failed and we were unable to recover it.
00:26:14.067 [2024-12-09 05:20:50.434705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.067 [2024-12-09 05:20:50.434724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420
00:26:14.067 qpair failed and we were unable to recover it.
00:26:14.067 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:14.067 [2024-12-09 05:20:50.434905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.067 [2024-12-09 05:20:50.434922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420
00:26:14.067 qpair failed and we were unable to recover it.
00:26:14.067 [2024-12-09 05:20:50.435142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.067 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.067 [2024-12-09 05:20:50.435161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420
00:26:14.067 qpair failed and we were unable to recover it.
00:26:14.067 [2024-12-09 05:20:50.435331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.067 [2024-12-09 05:20:50.435346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420
00:26:14.067 qpair failed and we were unable to recover it.
00:26:14.067 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:14.067 [2024-12-09 05:20:50.435537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.067 [2024-12-09 05:20:50.435556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420
00:26:14.067 qpair failed and we were unable to recover it.
00:26:14.067 [2024-12-09 05:20:50.435738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.067 [2024-12-09 05:20:50.435755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420
00:26:14.067 qpair failed and we were unable to recover it.
00:26:14.067 [2024-12-09 05:20:50.435978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.435994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 00:26:14.067 [2024-12-09 05:20:50.436244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.436261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 00:26:14.067 [2024-12-09 05:20:50.436361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.436378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 00:26:14.067 [2024-12-09 05:20:50.436650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.436667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 00:26:14.067 [2024-12-09 05:20:50.436876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.067 [2024-12-09 05:20:50.436892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.067 qpair failed and we were unable to recover it. 00:26:14.067 [2024-12-09 05:20:50.437110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.437127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.437297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.437313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.437415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.437431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.437599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.437615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.437832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.437852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 
00:26:14.068 [2024-12-09 05:20:50.437986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.438008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.438223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.438239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.438421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.438436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.438539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.438556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.438838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.438855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.439022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.439039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.439141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.439158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.439330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.439346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.439566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.439583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.439811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.439828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 
00:26:14.068 [2024-12-09 05:20:50.439921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.439937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.440116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.440133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.440304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.440320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.440475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.440490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.440586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.440603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.440781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.440797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.440981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.441012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.441191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.441208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.441314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.441331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.441502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.441518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 
00:26:14.068 [2024-12-09 05:20:50.441764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.441781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.442027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.442045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.442217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.442233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.442450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.442466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.442708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.442726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.442965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.442982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.443165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.443189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.443288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.443301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.443462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.443475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.443699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.443711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 
00:26:14.068 [2024-12-09 05:20:50.443930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.443943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.444102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.444115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.444231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.068 [2024-12-09 05:20:50.444243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.068 qpair failed and we were unable to recover it. 00:26:14.068 [2024-12-09 05:20:50.444403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.444415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.444570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.444582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.444686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.444698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.444869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.444882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.445094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.445107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.445286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.445297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.445524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.445539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 
00:26:14.069 [2024-12-09 05:20:50.445747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.445759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.445910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.445922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.446129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.446142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.446390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.446403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.446668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.446681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.446860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.446873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.447038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.447051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.447299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.447312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.447405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.447417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.447603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.447616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 
00:26:14.069 [2024-12-09 05:20:50.447848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.447861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.448038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.448051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.448140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.448152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.448248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.448260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.448488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.448501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.448690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.448702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.448862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.448874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.449137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.449150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.449331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.449344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.449422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.449435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 
00:26:14.069 [2024-12-09 05:20:50.449544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.449556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.449635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.449647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.449804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.449817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.449898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.449910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.450123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.450135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.450233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.450246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.450429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.069 [2024-12-09 05:20:50.450442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.069 qpair failed and we were unable to recover it. 00:26:14.069 [2024-12-09 05:20:50.450526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.450539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.450689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.450701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.450918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.450931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 
00:26:14.070 [2024-12-09 05:20:50.451130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.451143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.451330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.451342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.451445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.451458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.451600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.451612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.451764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.451777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.451981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.451993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.452095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.452107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.452347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.452360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.452517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.452529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.452740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.452755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 
00:26:14.070 [2024-12-09 05:20:50.453008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.453021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.453129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.453141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.453255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.453267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.453478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.453491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.453731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.453743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.453949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.453961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.454194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.454207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.454442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.454455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.454567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.454580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.454786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.454798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 
00:26:14.070 [2024-12-09 05:20:50.454954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.454966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.455107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.455120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.455351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.455364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.455471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.455484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.455721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.455733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.455940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.455953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.456039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.456052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.456286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.456298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.456399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.456412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.456553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.456565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 
00:26:14.070 [2024-12-09 05:20:50.456723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.456736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.456849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.456861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.457048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.457060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.457275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.457287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.457465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.070 [2024-12-09 05:20:50.457477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.070 qpair failed and we were unable to recover it. 00:26:14.070 [2024-12-09 05:20:50.457650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.457663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.457871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.457884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.458027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.458040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.458287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.458300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.458515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.458528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 
00:26:14.071 [2024-12-09 05:20:50.458692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.458704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.458849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.458861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.459015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.459028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.459119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.459132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.459291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.459303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.459535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.459547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.459794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.459806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.459898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.459911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.460083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.460096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.460332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.460347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 
00:26:14.071 [2024-12-09 05:20:50.460583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.460595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.460817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.460830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.460919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.460932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.461145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.461157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.461310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.461322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.461483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.461495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.461723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.461735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.461886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.461899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.462145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.462158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.462321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.462333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 
00:26:14.071 [2024-12-09 05:20:50.462496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.462508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.462705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.462717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.462926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.462938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.463139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.463152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.463268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.463281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.463439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.463452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.463629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.463641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.463903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.463915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.464084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.464097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.464239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.464252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 
00:26:14.071 [2024-12-09 05:20:50.464345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.464357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.464596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.464608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.464751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.071 [2024-12-09 05:20:50.464763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.071 qpair failed and we were unable to recover it. 00:26:14.071 [2024-12-09 05:20:50.465016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.465030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.465271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.465285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.465515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.465528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.465696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.465716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.465959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.465976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.466251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.466268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.466435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.466451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 
00:26:14.072 [2024-12-09 05:20:50.466701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.466718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.466829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.466845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.467056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.467073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.467288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.467305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.467544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.467560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.467710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.467726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.467810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.467827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.468065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.468083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.468247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.468263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.468412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.468432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 
00:26:14.072 [2024-12-09 05:20:50.468526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.468542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.468809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.468826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.468991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.469012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.469199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.469217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.469461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.469478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.469576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.469592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.469801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.469818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.470063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.470080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.470293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.470307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.470474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.470486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 
00:26:14.072 [2024-12-09 05:20:50.470687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.470700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.470810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.470823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.470989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.471006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.471254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.471268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.471448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.471461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.471607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.471619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.471833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.471846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.472003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.472016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.472251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.472263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.472406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.472419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 
00:26:14.072 [2024-12-09 05:20:50.472652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.072 [2024-12-09 05:20:50.472665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.072 qpair failed and we were unable to recover it. 00:26:14.072 [2024-12-09 05:20:50.472892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.472904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.473061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.473074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.473276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.473289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.473499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.473512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.473734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.473748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.473909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.473925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.474173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.474187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.474283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.474296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.474464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.474476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 
00:26:14.073 [2024-12-09 05:20:50.474704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.474716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.474873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.474885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.474985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.475001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.475226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.475238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.475391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.475403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.475577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.475589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.475850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.475863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.476093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.476106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.476277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.476291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.476452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.476464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 
00:26:14.073 [2024-12-09 05:20:50.476725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.476738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 Malloc0 00:26:14.073 [2024-12-09 05:20:50.476914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.476927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.477071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.477083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.477266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.477287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.477539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.477556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.073 [2024-12-09 05:20:50.477731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.477749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.477915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.477933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:14.073 [2024-12-09 05:20:50.478100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.478118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 
00:26:14.073 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.073 [2024-12-09 05:20:50.478364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.478384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.478551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.478568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:14.073 [2024-12-09 05:20:50.478806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.478824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.479073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.479092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.479192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.479209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.479361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.479377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.479617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.479634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.479746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.479763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 00:26:14.073 [2024-12-09 05:20:50.480045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.480063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.073 qpair failed and we were unable to recover it. 
00:26:14.073 [2024-12-09 05:20:50.480217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.073 [2024-12-09 05:20:50.480234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.480400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.480418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.480620] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:14.074 [2024-12-09 05:20:50.480666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.480681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.480882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.480899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.481017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.481038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.481261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.481282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.481394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.481414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.481673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.481697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.481900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.481935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 
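The rpc_cmd nvmf_create_transport -t tcp call interleaved above is what produces the "*** TCP Transport Init ***" notice from nvmf_tcp_create. As a rough sketch of what that wrapper ultimately sends (not the test's own code; /var/tmp/spdk.sock is SPDK's default RPC listen path and is an assumption here, as the test may point rpc_cmd at a different socket):

    # Hedged sketch: the shell wrapper boils down to a JSON-RPC request
    # like this one, delivered to the SPDK application's RPC Unix socket.
    import json
    import socket

    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "nvmf_create_transport",
        "params": {"trtype": "TCP"},
    }

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect("/var/tmp/spdk.sock")  # assumed default RPC socket path
        sock.sendall(json.dumps(request).encode())
        # Response is a small JSON-RPC reply, e.g. {"jsonrpc":"2.0","id":1,"result":true}
        print(sock.recv(4096).decode())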
00:26:14.074 [2024-12-09 05:20:50.482234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.482250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.482414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.482430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.482519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.482534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.482776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.482791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.483010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.483025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.483191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.483207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.483347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.483362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.483575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.483591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.483758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.483775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.484006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.484022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 
00:26:14.074 [2024-12-09 05:20:50.484121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.484135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.484298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.484316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.484468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.484483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.484579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.484595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.484756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.484769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.484916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.484930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.485106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.485121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.485369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.485385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.485550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.485566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.485738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.485753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 
00:26:14.074 [2024-12-09 05:20:50.486005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.486019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.486108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.486123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.486215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.486230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.486329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.486343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.486507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.074 [2024-12-09 05:20:50.486519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.074 qpair failed and we were unable to recover it. 00:26:14.074 [2024-12-09 05:20:50.486784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.486796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-09 05:20:50.486894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.486906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-09 05:20:50.487116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.487128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-09 05:20:50.487292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.487304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-09 05:20:50.487535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.487547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 
00:26:14.075 [2024-12-09 05:20:50.487688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.487700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-09 05:20:50.487842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.487855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-09 05:20:50.488013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.488026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-09 05:20:50.488132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.488144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-09 05:20:50.488224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.488237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-09 05:20:50.488325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.488338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-09 05:20:50.488486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.488499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-09 05:20:50.488709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.488722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b4000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-09 05:20:50.488970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.488992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-09 05:20:50.489164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.489182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 
00:26:14.075 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.075 [2024-12-09 05:20:50.489278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.075 [2024-12-09 05:20:50.489295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420
00:26:14.075 qpair failed and we were unable to recover it.
00:26:14.075 [2024-12-09 05:20:50.489563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.075 [2024-12-09 05:20:50.489580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420
00:26:14.075 qpair failed and we were unable to recover it.
00:26:14.075 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:14.075 [2024-12-09 05:20:50.489696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.075 [2024-12-09 05:20:50.489713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420
00:26:14.075 qpair failed and we were unable to recover it.
00:26:14.075 [2024-12-09 05:20:50.489893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.075 [2024-12-09 05:20:50.489911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420
00:26:14.075 qpair failed and we were unable to recover it.
00:26:14.075 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.075 [2024-12-09 05:20:50.490009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.075 [2024-12-09 05:20:50.490026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420
00:26:14.075 qpair failed and we were unable to recover it.
00:26:14.075 [2024-12-09 05:20:50.490214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.075 [2024-12-09 05:20:50.490231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420
00:26:14.075 qpair failed and we were unable to recover it.
00:26:14.075 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:14.075 [2024-12-09 05:20:50.490343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.075 [2024-12-09 05:20:50.490359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420
00:26:14.075 qpair failed and we were unable to recover it.
00:26:14.075 [2024-12-09 05:20:50.490531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.075 [2024-12-09 05:20:50.490547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420
00:26:14.075 qpair failed and we were unable to recover it.
00:26:14.075 [2024-12-09 05:20:50.490703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.490720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-09 05:20:50.490898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.490914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-09 05:20:50.491030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.491047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-09 05:20:50.491209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.491226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-09 05:20:50.491397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.491413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-09 05:20:50.491515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.491531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-09 05:20:50.491696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.491713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-09 05:20:50.491852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.491868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-09 05:20:50.492035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.492051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-09 05:20:50.492285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.492302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 
00:26:14.075 [2024-12-09 05:20:50.492553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.492570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-09 05:20:50.492765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.492781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-09 05:20:50.493027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.493043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-09 05:20:50.493281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.493297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-09 05:20:50.493515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.493531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.075 [2024-12-09 05:20:50.493650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.075 [2024-12-09 05:20:50.493666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.075 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.493935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.493952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.494065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.494081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.494329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.494346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.494531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.494548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 
00:26:14.076 [2024-12-09 05:20:50.494737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.494754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.494964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.494981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.495157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.495174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.495439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.495455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.495646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.495662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.495820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.495836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.496020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.496037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.496226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.496242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.496398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.496418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.496508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.496524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 
00:26:14.076 [2024-12-09 05:20:50.496785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.076 [2024-12-09 05:20:50.496801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420
00:26:14.076 qpair failed and we were unable to recover it.
00:26:14.076 [2024-12-09 05:20:50.497047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.076 [2024-12-09 05:20:50.497064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420
00:26:14.076 qpair failed and we were unable to recover it.
00:26:14.076 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.076 [2024-12-09 05:20:50.497304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.076 [2024-12-09 05:20:50.497321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420
00:26:14.076 qpair failed and we were unable to recover it.
00:26:14.076 [2024-12-09 05:20:50.497557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.076 [2024-12-09 05:20:50.497575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420
00:26:14.076 qpair failed and we were unable to recover it.
00:26:14.076 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:14.076 [2024-12-09 05:20:50.497736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.076 [2024-12-09 05:20:50.497753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420
00:26:14.076 qpair failed and we were unable to recover it.
00:26:14.076 [2024-12-09 05:20:50.497913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.076 [2024-12-09 05:20:50.497930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420
00:26:14.076 qpair failed and we were unable to recover it.
00:26:14.076 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.076 [2024-12-09 05:20:50.498137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.076 [2024-12-09 05:20:50.498154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420
00:26:14.076 qpair failed and we were unable to recover it.
00:26:14.076 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:14.076 [2024-12-09 05:20:50.498400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.076 [2024-12-09 05:20:50.498417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420
00:26:14.076 qpair failed and we were unable to recover it.
00:26:14.076 [2024-12-09 05:20:50.498674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.498690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.498911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.498928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.499161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.499178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.499396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.499412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.499586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.499601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.499783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.499799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.500045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.500062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.500288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.500304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.500604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.500620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.500785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.500801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 
00:26:14.076 [2024-12-09 05:20:50.500896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.500913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.501129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.501145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.501235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.501251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.501497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.501514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.501759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.501775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96bc000b90 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.502038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.502069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.502260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.502278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.076 qpair failed and we were unable to recover it. 00:26:14.076 [2024-12-09 05:20:50.502445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.076 [2024-12-09 05:20:50.502462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.077 qpair failed and we were unable to recover it. 00:26:14.077 [2024-12-09 05:20:50.502550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.077 [2024-12-09 05:20:50.502565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.077 qpair failed and we were unable to recover it. 00:26:14.077 [2024-12-09 05:20:50.502747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.077 [2024-12-09 05:20:50.502763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.077 qpair failed and we were unable to recover it. 
00:26:14.077 [2024-12-09 05:20:50.502918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.077 [2024-12-09 05:20:50.502934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.077 qpair failed and we were unable to recover it. 00:26:14.077 [2024-12-09 05:20:50.503081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.077 [2024-12-09 05:20:50.503098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.077 qpair failed and we were unable to recover it. 00:26:14.077 [2024-12-09 05:20:50.503202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.077 [2024-12-09 05:20:50.503219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.077 qpair failed and we were unable to recover it. 00:26:14.077 [2024-12-09 05:20:50.503380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.077 [2024-12-09 05:20:50.503396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.077 qpair failed and we were unable to recover it. 00:26:14.077 [2024-12-09 05:20:50.503566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.077 [2024-12-09 05:20:50.503582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.077 qpair failed and we were unable to recover it. 00:26:14.077 [2024-12-09 05:20:50.503732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.077 [2024-12-09 05:20:50.503748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.077 qpair failed and we were unable to recover it. 00:26:14.077 [2024-12-09 05:20:50.503896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.077 [2024-12-09 05:20:50.503913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.077 qpair failed and we were unable to recover it. 00:26:14.077 [2024-12-09 05:20:50.504006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.077 [2024-12-09 05:20:50.504022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.077 qpair failed and we were unable to recover it. 00:26:14.077 [2024-12-09 05:20:50.504257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.077 [2024-12-09 05:20:50.504274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.077 qpair failed and we were unable to recover it. 00:26:14.077 [2024-12-09 05:20:50.504502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.077 [2024-12-09 05:20:50.504519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.077 qpair failed and we were unable to recover it. 
00:26:14.077 [2024-12-09 05:20:50.504683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.077 [2024-12-09 05:20:50.504703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420
00:26:14.077 qpair failed and we were unable to recover it.
00:26:14.077 [2024-12-09 05:20:50.504891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.077 [2024-12-09 05:20:50.504908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420
00:26:14.077 qpair failed and we were unable to recover it.
00:26:14.077 [2024-12-09 05:20:50.505057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.077 [2024-12-09 05:20:50.505074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420
00:26:14.077 qpair failed and we were unable to recover it.
00:26:14.077 [2024-12-09 05:20:50.505239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.077 [2024-12-09 05:20:50.505257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420
00:26:14.077 qpair failed and we were unable to recover it.
00:26:14.077 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.077 [2024-12-09 05:20:50.505478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.077 [2024-12-09 05:20:50.505495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420
00:26:14.077 qpair failed and we were unable to recover it.
00:26:14.077 [2024-12-09 05:20:50.505677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.077 [2024-12-09 05:20:50.505693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420
00:26:14.077 qpair failed and we were unable to recover it.
00:26:14.077 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:14.077 [2024-12-09 05:20:50.505886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.077 [2024-12-09 05:20:50.505903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420
00:26:14.077 qpair failed and we were unable to recover it.
00:26:14.077 [2024-12-09 05:20:50.506002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.077 [2024-12-09 05:20:50.506020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420
00:26:14.077 qpair failed and we were unable to recover it.
00:26:14.077 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.077 [2024-12-09 05:20:50.506183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.077 [2024-12-09 05:20:50.506201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.077 qpair failed and we were unable to recover it. 00:26:14.077 [2024-12-09 05:20:50.506268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.077 [2024-12-09 05:20:50.506284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.077 qpair failed and we were unable to recover it. 00:26:14.077 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:14.077 [2024-12-09 05:20:50.506501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.077 [2024-12-09 05:20:50.506522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.077 qpair failed and we were unable to recover it. 00:26:14.077 [2024-12-09 05:20:50.506685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.077 [2024-12-09 05:20:50.506702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.077 qpair failed and we were unable to recover it. 00:26:14.077 [2024-12-09 05:20:50.506806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.077 [2024-12-09 05:20:50.506823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.077 qpair failed and we were unable to recover it. 00:26:14.077 [2024-12-09 05:20:50.506967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.077 [2024-12-09 05:20:50.506984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.077 qpair failed and we were unable to recover it. 00:26:14.077 [2024-12-09 05:20:50.507082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.077 [2024-12-09 05:20:50.507100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.077 qpair failed and we were unable to recover it. 00:26:14.077 [2024-12-09 05:20:50.507251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.077 [2024-12-09 05:20:50.507267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.077 qpair failed and we were unable to recover it. 00:26:14.077 [2024-12-09 05:20:50.507455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.077 [2024-12-09 05:20:50.507472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420 00:26:14.077 qpair failed and we were unable to recover it. 
00:26:14.077 [2024-12-09 05:20:50.507578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.077 [2024-12-09 05:20:50.507595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420
00:26:14.077 qpair failed and we were unable to recover it.
00:26:14.077 [2024-12-09 05:20:50.507812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.077 [2024-12-09 05:20:50.507828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420
00:26:14.077 qpair failed and we were unable to recover it.
00:26:14.077 [2024-12-09 05:20:50.508047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.077 [2024-12-09 05:20:50.508064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420
00:26:14.077 qpair failed and we were unable to recover it.
00:26:14.077 [2024-12-09 05:20:50.508287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.077 [2024-12-09 05:20:50.508305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420
00:26:14.077 qpair failed and we were unable to recover it.
00:26:14.077 [2024-12-09 05:20:50.508406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.077 [2024-12-09 05:20:50.508423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420
00:26:14.077 qpair failed and we were unable to recover it.
00:26:14.077 [2024-12-09 05:20:50.508670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.077 [2024-12-09 05:20:50.508687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7fbe0 with addr=10.0.0.2, port=4420
00:26:14.077 qpair failed and we were unable to recover it.
00:26:14.077 [2024-12-09 05:20:50.509138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:14.077 [2024-12-09 05:20:50.511410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.077 [2024-12-09 05:20:50.511512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.077 [2024-12-09 05:20:50.511535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.077 [2024-12-09 05:20:50.511546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.077 [2024-12-09 05:20:50.511555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0
00:26:14.077 [2024-12-09 05:20:50.511581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:14.077 qpair failed and we were unable to recover it.
00:26:14.077 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.077 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:14.077 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.078 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:14.078 [2024-12-09 05:20:50.521209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.078 [2024-12-09 05:20:50.521306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.078 [2024-12-09 05:20:50.521325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.078 [2024-12-09 05:20:50.521335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.078 [2024-12-09 05:20:50.521343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.078 [2024-12-09 05:20:50.521364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.078 qpair failed and we were unable to recover it. 00:26:14.078 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.078 05:20:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3738388 00:26:14.078 [2024-12-09 05:20:50.531293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.078 [2024-12-09 05:20:50.531369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.078 [2024-12-09 05:20:50.531385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.078 [2024-12-09 05:20:50.531392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.078 [2024-12-09 05:20:50.531399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.078 [2024-12-09 05:20:50.531414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.078 qpair failed and we were unable to recover it. 
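For reference, the rpc_cmd calls traced above are the target-side bring-up for this test case: create the subsystem, attach the Malloc0 namespace, and add TCP listeners for the subsystem and for discovery on 10.0.0.2:4420. A minimal standalone sketch of the same sequence, assuming SPDK's scripts/rpc.py, a running nvmf_tgt with the TCP transport already created, and an existing Malloc0 bdev (those earlier steps are not part of this excerpt):
# sketch only - mirrors the rpc_cmd arguments visible in the trace above
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420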
00:26:14.078 [2024-12-09 05:20:50.541337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.078 [2024-12-09 05:20:50.541419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.078 [2024-12-09 05:20:50.541435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.078 [2024-12-09 05:20:50.541443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.078 [2024-12-09 05:20:50.541453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.078 [2024-12-09 05:20:50.541470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.078 qpair failed and we were unable to recover it. 00:26:14.078 [2024-12-09 05:20:50.551245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.078 [2024-12-09 05:20:50.551324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.078 [2024-12-09 05:20:50.551340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.078 [2024-12-09 05:20:50.551347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.078 [2024-12-09 05:20:50.551353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.078 [2024-12-09 05:20:50.551370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.078 qpair failed and we were unable to recover it. 00:26:14.078 [2024-12-09 05:20:50.561329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.078 [2024-12-09 05:20:50.561433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.078 [2024-12-09 05:20:50.561448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.078 [2024-12-09 05:20:50.561455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.078 [2024-12-09 05:20:50.561461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.078 [2024-12-09 05:20:50.561476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.078 qpair failed and we were unable to recover it. 
00:26:14.078 [2024-12-09 05:20:50.571298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.078 [2024-12-09 05:20:50.571368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.078 [2024-12-09 05:20:50.571383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.078 [2024-12-09 05:20:50.571391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.078 [2024-12-09 05:20:50.571397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.078 [2024-12-09 05:20:50.571412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.078 qpair failed and we were unable to recover it. 00:26:14.078 [2024-12-09 05:20:50.581303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.078 [2024-12-09 05:20:50.581389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.078 [2024-12-09 05:20:50.581404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.078 [2024-12-09 05:20:50.581412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.078 [2024-12-09 05:20:50.581418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.078 [2024-12-09 05:20:50.581433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.078 qpair failed and we were unable to recover it. 00:26:14.078 [2024-12-09 05:20:50.591450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.078 [2024-12-09 05:20:50.591546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.078 [2024-12-09 05:20:50.591562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.078 [2024-12-09 05:20:50.591569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.078 [2024-12-09 05:20:50.591575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.078 [2024-12-09 05:20:50.591591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.078 qpair failed and we were unable to recover it. 
00:26:14.078 [2024-12-09 05:20:50.601408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.078 [2024-12-09 05:20:50.601475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.078 [2024-12-09 05:20:50.601490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.078 [2024-12-09 05:20:50.601498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.078 [2024-12-09 05:20:50.601504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.078 [2024-12-09 05:20:50.601519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.078 qpair failed and we were unable to recover it. 00:26:14.078 [2024-12-09 05:20:50.611364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.078 [2024-12-09 05:20:50.611455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.078 [2024-12-09 05:20:50.611471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.078 [2024-12-09 05:20:50.611479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.078 [2024-12-09 05:20:50.611485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.078 [2024-12-09 05:20:50.611500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.078 qpair failed and we were unable to recover it. 00:26:14.078 [2024-12-09 05:20:50.621492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.078 [2024-12-09 05:20:50.621571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.078 [2024-12-09 05:20:50.621586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.078 [2024-12-09 05:20:50.621594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.078 [2024-12-09 05:20:50.621600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.078 [2024-12-09 05:20:50.621616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.078 qpair failed and we were unable to recover it. 
00:26:14.345 [2024-12-09 05:20:50.631484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.345 [2024-12-09 05:20:50.631553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.345 [2024-12-09 05:20:50.631572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.345 [2024-12-09 05:20:50.631580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.345 [2024-12-09 05:20:50.631587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.345 [2024-12-09 05:20:50.631602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.345 qpair failed and we were unable to recover it. 00:26:14.345 [2024-12-09 05:20:50.641475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.345 [2024-12-09 05:20:50.641548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.345 [2024-12-09 05:20:50.641562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.345 [2024-12-09 05:20:50.641570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.345 [2024-12-09 05:20:50.641576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.345 [2024-12-09 05:20:50.641592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.345 qpair failed and we were unable to recover it. 00:26:14.345 [2024-12-09 05:20:50.651569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.345 [2024-12-09 05:20:50.651661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.345 [2024-12-09 05:20:50.651676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.345 [2024-12-09 05:20:50.651684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.345 [2024-12-09 05:20:50.651690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.345 [2024-12-09 05:20:50.651705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.345 qpair failed and we were unable to recover it. 
00:26:14.345 [2024-12-09 05:20:50.661619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.345 [2024-12-09 05:20:50.661713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.345 [2024-12-09 05:20:50.661728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.345 [2024-12-09 05:20:50.661736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.345 [2024-12-09 05:20:50.661742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.345 [2024-12-09 05:20:50.661758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.345 qpair failed and we were unable to recover it. 00:26:14.345 [2024-12-09 05:20:50.671618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.345 [2024-12-09 05:20:50.671694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.345 [2024-12-09 05:20:50.671709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.345 [2024-12-09 05:20:50.671716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.345 [2024-12-09 05:20:50.671725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.345 [2024-12-09 05:20:50.671741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.345 qpair failed and we were unable to recover it. 00:26:14.345 [2024-12-09 05:20:50.681601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.345 [2024-12-09 05:20:50.681693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.345 [2024-12-09 05:20:50.681708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.345 [2024-12-09 05:20:50.681715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.345 [2024-12-09 05:20:50.681722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.345 [2024-12-09 05:20:50.681737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.345 qpair failed and we were unable to recover it. 
00:26:14.345 [2024-12-09 05:20:50.691668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.345 [2024-12-09 05:20:50.691780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.345 [2024-12-09 05:20:50.691795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.345 [2024-12-09 05:20:50.691803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.345 [2024-12-09 05:20:50.691809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.345 [2024-12-09 05:20:50.691824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.345 qpair failed and we were unable to recover it. 00:26:14.345 [2024-12-09 05:20:50.701706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.345 [2024-12-09 05:20:50.701784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.345 [2024-12-09 05:20:50.701800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.345 [2024-12-09 05:20:50.701807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.345 [2024-12-09 05:20:50.701813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.345 [2024-12-09 05:20:50.701829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.345 qpair failed and we were unable to recover it. 00:26:14.345 [2024-12-09 05:20:50.711733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.345 [2024-12-09 05:20:50.711842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.346 [2024-12-09 05:20:50.711857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.346 [2024-12-09 05:20:50.711865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.346 [2024-12-09 05:20:50.711871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.346 [2024-12-09 05:20:50.711887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.346 qpair failed and we were unable to recover it. 
00:26:14.346 [2024-12-09 05:20:50.721713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.346 [2024-12-09 05:20:50.721785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.346 [2024-12-09 05:20:50.721800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.346 [2024-12-09 05:20:50.721808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.346 [2024-12-09 05:20:50.721814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.346 [2024-12-09 05:20:50.721830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.346 qpair failed and we were unable to recover it. 00:26:14.346 [2024-12-09 05:20:50.731761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.346 [2024-12-09 05:20:50.731832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.346 [2024-12-09 05:20:50.731847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.346 [2024-12-09 05:20:50.731855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.346 [2024-12-09 05:20:50.731861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.346 [2024-12-09 05:20:50.731876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.346 qpair failed and we were unable to recover it. 00:26:14.346 [2024-12-09 05:20:50.741860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.346 [2024-12-09 05:20:50.741954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.346 [2024-12-09 05:20:50.741970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.346 [2024-12-09 05:20:50.741977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.346 [2024-12-09 05:20:50.741984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.346 [2024-12-09 05:20:50.742005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.346 qpair failed and we were unable to recover it. 
00:26:14.346 [2024-12-09 05:20:50.751803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.346 [2024-12-09 05:20:50.751875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.346 [2024-12-09 05:20:50.751892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.346 [2024-12-09 05:20:50.751899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.346 [2024-12-09 05:20:50.751906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.346 [2024-12-09 05:20:50.751921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.346 qpair failed and we were unable to recover it. 00:26:14.346 [2024-12-09 05:20:50.761841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.346 [2024-12-09 05:20:50.761914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.346 [2024-12-09 05:20:50.761932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.346 [2024-12-09 05:20:50.761940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.346 [2024-12-09 05:20:50.761947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.346 [2024-12-09 05:20:50.761962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.346 qpair failed and we were unable to recover it. 00:26:14.346 [2024-12-09 05:20:50.771858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.346 [2024-12-09 05:20:50.771925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.346 [2024-12-09 05:20:50.771940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.346 [2024-12-09 05:20:50.771948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.346 [2024-12-09 05:20:50.771954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.346 [2024-12-09 05:20:50.771969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.346 qpair failed and we were unable to recover it. 
00:26:14.346 [2024-12-09 05:20:50.781784] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.346 [2024-12-09 05:20:50.781850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.346 [2024-12-09 05:20:50.781866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.346 [2024-12-09 05:20:50.781874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.346 [2024-12-09 05:20:50.781880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.346 [2024-12-09 05:20:50.781896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.346 qpair failed and we were unable to recover it. 00:26:14.346 [2024-12-09 05:20:50.791904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.346 [2024-12-09 05:20:50.791977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.346 [2024-12-09 05:20:50.791992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.346 [2024-12-09 05:20:50.792003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.346 [2024-12-09 05:20:50.792010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.346 [2024-12-09 05:20:50.792025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.346 qpair failed and we were unable to recover it. 00:26:14.346 [2024-12-09 05:20:50.801928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.346 [2024-12-09 05:20:50.801993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.346 [2024-12-09 05:20:50.802012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.346 [2024-12-09 05:20:50.802020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.346 [2024-12-09 05:20:50.802029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.346 [2024-12-09 05:20:50.802045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.346 qpair failed and we were unable to recover it. 
00:26:14.347 [2024-12-09 05:20:50.811905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.347 [2024-12-09 05:20:50.811962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.347 [2024-12-09 05:20:50.811978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.347 [2024-12-09 05:20:50.811985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.347 [2024-12-09 05:20:50.811991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.347 [2024-12-09 05:20:50.812011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.347 qpair failed and we were unable to recover it. 00:26:14.347 [2024-12-09 05:20:50.821958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.347 [2024-12-09 05:20:50.822025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.347 [2024-12-09 05:20:50.822040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.347 [2024-12-09 05:20:50.822047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.347 [2024-12-09 05:20:50.822054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.347 [2024-12-09 05:20:50.822070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.347 qpair failed and we were unable to recover it. 00:26:14.347 [2024-12-09 05:20:50.831969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.347 [2024-12-09 05:20:50.832039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.347 [2024-12-09 05:20:50.832054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.347 [2024-12-09 05:20:50.832061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.347 [2024-12-09 05:20:50.832067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.347 [2024-12-09 05:20:50.832082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.347 qpair failed and we were unable to recover it. 
00:26:14.347 [2024-12-09 05:20:50.842012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.347 [2024-12-09 05:20:50.842073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.347 [2024-12-09 05:20:50.842088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.347 [2024-12-09 05:20:50.842096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.347 [2024-12-09 05:20:50.842102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.347 [2024-12-09 05:20:50.842117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.347 qpair failed and we were unable to recover it. 00:26:14.347 [2024-12-09 05:20:50.851964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.347 [2024-12-09 05:20:50.852038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.347 [2024-12-09 05:20:50.852053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.347 [2024-12-09 05:20:50.852061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.347 [2024-12-09 05:20:50.852067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.347 [2024-12-09 05:20:50.852083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.347 qpair failed and we were unable to recover it. 00:26:14.347 [2024-12-09 05:20:50.862086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.347 [2024-12-09 05:20:50.862192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.347 [2024-12-09 05:20:50.862207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.347 [2024-12-09 05:20:50.862214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.347 [2024-12-09 05:20:50.862221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.347 [2024-12-09 05:20:50.862236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.347 qpair failed and we were unable to recover it. 
00:26:14.347 [2024-12-09 05:20:50.872126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.347 [2024-12-09 05:20:50.872192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.347 [2024-12-09 05:20:50.872208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.347 [2024-12-09 05:20:50.872216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.347 [2024-12-09 05:20:50.872222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.347 [2024-12-09 05:20:50.872238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.347 qpair failed and we were unable to recover it. 00:26:14.347 [2024-12-09 05:20:50.882108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.347 [2024-12-09 05:20:50.882169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.347 [2024-12-09 05:20:50.882183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.347 [2024-12-09 05:20:50.882190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.347 [2024-12-09 05:20:50.882197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.347 [2024-12-09 05:20:50.882212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.347 qpair failed and we were unable to recover it. 00:26:14.347 [2024-12-09 05:20:50.892128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.347 [2024-12-09 05:20:50.892199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.347 [2024-12-09 05:20:50.892216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.347 [2024-12-09 05:20:50.892224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.347 [2024-12-09 05:20:50.892230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.347 [2024-12-09 05:20:50.892245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.347 qpair failed and we were unable to recover it. 
00:26:14.347 [2024-12-09 05:20:50.902197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.347 [2024-12-09 05:20:50.902295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.347 [2024-12-09 05:20:50.902310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.347 [2024-12-09 05:20:50.902317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.347 [2024-12-09 05:20:50.902323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.347 [2024-12-09 05:20:50.902339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.348 qpair failed and we were unable to recover it. 00:26:14.348 [2024-12-09 05:20:50.912207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.348 [2024-12-09 05:20:50.912269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.348 [2024-12-09 05:20:50.912284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.348 [2024-12-09 05:20:50.912291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.348 [2024-12-09 05:20:50.912297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.348 [2024-12-09 05:20:50.912312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.348 qpair failed and we were unable to recover it. 00:26:14.348 [2024-12-09 05:20:50.922231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.348 [2024-12-09 05:20:50.922294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.348 [2024-12-09 05:20:50.922309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.348 [2024-12-09 05:20:50.922316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.348 [2024-12-09 05:20:50.922323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.348 [2024-12-09 05:20:50.922338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.348 qpair failed and we were unable to recover it. 
00:26:14.348 [2024-12-09 05:20:50.932303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.348 [2024-12-09 05:20:50.932364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.348 [2024-12-09 05:20:50.932378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.348 [2024-12-09 05:20:50.932386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.348 [2024-12-09 05:20:50.932395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.348 [2024-12-09 05:20:50.932410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.348 qpair failed and we were unable to recover it. 00:26:14.348 [2024-12-09 05:20:50.942338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.348 [2024-12-09 05:20:50.942400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.348 [2024-12-09 05:20:50.942416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.348 [2024-12-09 05:20:50.942423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.348 [2024-12-09 05:20:50.942430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.348 [2024-12-09 05:20:50.942446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.348 qpair failed and we were unable to recover it. 00:26:14.348 [2024-12-09 05:20:50.952318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.348 [2024-12-09 05:20:50.952380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.348 [2024-12-09 05:20:50.952394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.348 [2024-12-09 05:20:50.952402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.348 [2024-12-09 05:20:50.952408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.348 [2024-12-09 05:20:50.952423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.348 qpair failed and we were unable to recover it. 
00:26:14.348 [2024-12-09 05:20:50.962321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.348 [2024-12-09 05:20:50.962409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.348 [2024-12-09 05:20:50.962424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.348 [2024-12-09 05:20:50.962431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.348 [2024-12-09 05:20:50.962437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.348 [2024-12-09 05:20:50.962451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.348 qpair failed and we were unable to recover it. 00:26:14.348 [2024-12-09 05:20:50.972379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.348 [2024-12-09 05:20:50.972443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.348 [2024-12-09 05:20:50.972461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.348 [2024-12-09 05:20:50.972468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.348 [2024-12-09 05:20:50.972476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.348 [2024-12-09 05:20:50.972493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.348 qpair failed and we were unable to recover it. 00:26:14.348 [2024-12-09 05:20:50.982417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.348 [2024-12-09 05:20:50.982489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.348 [2024-12-09 05:20:50.982504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.348 [2024-12-09 05:20:50.982512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.348 [2024-12-09 05:20:50.982518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.348 [2024-12-09 05:20:50.982533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.348 qpair failed and we were unable to recover it. 
00:26:14.608 [2024-12-09 05:20:50.992432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.608 [2024-12-09 05:20:50.992495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.608 [2024-12-09 05:20:50.992513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.608 [2024-12-09 05:20:50.992520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.608 [2024-12-09 05:20:50.992527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.608 [2024-12-09 05:20:50.992542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.608 qpair failed and we were unable to recover it. 00:26:14.608 [2024-12-09 05:20:51.002444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.608 [2024-12-09 05:20:51.002506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.608 [2024-12-09 05:20:51.002521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.608 [2024-12-09 05:20:51.002529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.608 [2024-12-09 05:20:51.002536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.608 [2024-12-09 05:20:51.002551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.608 qpair failed and we were unable to recover it. 00:26:14.608 [2024-12-09 05:20:51.012470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.608 [2024-12-09 05:20:51.012544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.608 [2024-12-09 05:20:51.012559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.608 [2024-12-09 05:20:51.012566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.608 [2024-12-09 05:20:51.012572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.608 [2024-12-09 05:20:51.012587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.608 qpair failed and we were unable to recover it. 
00:26:14.608 [2024-12-09 05:20:51.022558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.608 [2024-12-09 05:20:51.022632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.608 [2024-12-09 05:20:51.022650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.608 [2024-12-09 05:20:51.022658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.608 [2024-12-09 05:20:51.022664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.608 [2024-12-09 05:20:51.022679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.608 qpair failed and we were unable to recover it. 00:26:14.608 [2024-12-09 05:20:51.032480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.608 [2024-12-09 05:20:51.032553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.608 [2024-12-09 05:20:51.032569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.608 [2024-12-09 05:20:51.032577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.608 [2024-12-09 05:20:51.032583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.608 [2024-12-09 05:20:51.032598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.608 qpair failed and we were unable to recover it. 00:26:14.608 [2024-12-09 05:20:51.042562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.608 [2024-12-09 05:20:51.042625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.608 [2024-12-09 05:20:51.042640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.608 [2024-12-09 05:20:51.042648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.608 [2024-12-09 05:20:51.042654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.608 [2024-12-09 05:20:51.042669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.608 qpair failed and we were unable to recover it. 
00:26:14.608 [2024-12-09 05:20:51.052586] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.608 [2024-12-09 05:20:51.052643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.608 [2024-12-09 05:20:51.052657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.608 [2024-12-09 05:20:51.052665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.608 [2024-12-09 05:20:51.052671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.608 [2024-12-09 05:20:51.052685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.608 qpair failed and we were unable to recover it. 00:26:14.608 [2024-12-09 05:20:51.062624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.608 [2024-12-09 05:20:51.062688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.608 [2024-12-09 05:20:51.062702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.608 [2024-12-09 05:20:51.062709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.608 [2024-12-09 05:20:51.062719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.608 [2024-12-09 05:20:51.062734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.608 qpair failed and we were unable to recover it. 00:26:14.608 [2024-12-09 05:20:51.072654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.609 [2024-12-09 05:20:51.072720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.609 [2024-12-09 05:20:51.072736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.609 [2024-12-09 05:20:51.072743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.609 [2024-12-09 05:20:51.072749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.609 [2024-12-09 05:20:51.072764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.609 qpair failed and we were unable to recover it. 
00:26:14.609 [2024-12-09 05:20:51.082759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.609 [2024-12-09 05:20:51.082824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.609 [2024-12-09 05:20:51.082839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.609 [2024-12-09 05:20:51.082846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.609 [2024-12-09 05:20:51.082853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.609 [2024-12-09 05:20:51.082868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.609 qpair failed and we were unable to recover it. 00:26:14.609 [2024-12-09 05:20:51.092715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.609 [2024-12-09 05:20:51.092778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.609 [2024-12-09 05:20:51.092793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.609 [2024-12-09 05:20:51.092801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.609 [2024-12-09 05:20:51.092807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.609 [2024-12-09 05:20:51.092821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.609 qpair failed and we were unable to recover it. 00:26:14.609 [2024-12-09 05:20:51.102746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.609 [2024-12-09 05:20:51.102811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.609 [2024-12-09 05:20:51.102826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.609 [2024-12-09 05:20:51.102833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.609 [2024-12-09 05:20:51.102840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.609 [2024-12-09 05:20:51.102855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.609 qpair failed and we were unable to recover it. 
00:26:14.609 [2024-12-09 05:20:51.112697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.609 [2024-12-09 05:20:51.112762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.609 [2024-12-09 05:20:51.112779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.609 [2024-12-09 05:20:51.112787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.609 [2024-12-09 05:20:51.112794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.609 [2024-12-09 05:20:51.112811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.609 qpair failed and we were unable to recover it. 00:26:14.609 [2024-12-09 05:20:51.122822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.609 [2024-12-09 05:20:51.122883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.609 [2024-12-09 05:20:51.122899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.609 [2024-12-09 05:20:51.122906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.609 [2024-12-09 05:20:51.122914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.609 [2024-12-09 05:20:51.122930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.609 qpair failed and we were unable to recover it. 00:26:14.609 [2024-12-09 05:20:51.132820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.609 [2024-12-09 05:20:51.132916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.609 [2024-12-09 05:20:51.132931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.609 [2024-12-09 05:20:51.132938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.609 [2024-12-09 05:20:51.132945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.609 [2024-12-09 05:20:51.132961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.609 qpair failed and we were unable to recover it. 
00:26:14.609 [2024-12-09 05:20:51.142851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.609 [2024-12-09 05:20:51.142924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.609 [2024-12-09 05:20:51.142940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.609 [2024-12-09 05:20:51.142947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.609 [2024-12-09 05:20:51.142953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.609 [2024-12-09 05:20:51.142969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.609 qpair failed and we were unable to recover it. 00:26:14.609 [2024-12-09 05:20:51.152874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.609 [2024-12-09 05:20:51.152938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.609 [2024-12-09 05:20:51.152956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.609 [2024-12-09 05:20:51.152963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.609 [2024-12-09 05:20:51.152969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.609 [2024-12-09 05:20:51.152985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.609 qpair failed and we were unable to recover it. 00:26:14.609 [2024-12-09 05:20:51.162902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.609 [2024-12-09 05:20:51.162968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.609 [2024-12-09 05:20:51.162983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.609 [2024-12-09 05:20:51.162990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.609 [2024-12-09 05:20:51.162996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.609 [2024-12-09 05:20:51.163015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.609 qpair failed and we were unable to recover it. 
00:26:14.609 [2024-12-09 05:20:51.172954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.609 [2024-12-09 05:20:51.173020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.609 [2024-12-09 05:20:51.173036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.609 [2024-12-09 05:20:51.173043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.609 [2024-12-09 05:20:51.173050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.609 [2024-12-09 05:20:51.173065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.609 qpair failed and we were unable to recover it. 00:26:14.609 [2024-12-09 05:20:51.182978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.609 [2024-12-09 05:20:51.183048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.609 [2024-12-09 05:20:51.183063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.609 [2024-12-09 05:20:51.183070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.609 [2024-12-09 05:20:51.183077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.609 [2024-12-09 05:20:51.183092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.609 qpair failed and we were unable to recover it. 00:26:14.610 [2024-12-09 05:20:51.193007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.610 [2024-12-09 05:20:51.193071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.610 [2024-12-09 05:20:51.193086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.610 [2024-12-09 05:20:51.193094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.610 [2024-12-09 05:20:51.193104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.610 [2024-12-09 05:20:51.193120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.610 qpair failed and we were unable to recover it. 
00:26:14.610 [2024-12-09 05:20:51.203027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.610 [2024-12-09 05:20:51.203090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.610 [2024-12-09 05:20:51.203107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.610 [2024-12-09 05:20:51.203115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.610 [2024-12-09 05:20:51.203122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.610 [2024-12-09 05:20:51.203138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.610 qpair failed and we were unable to recover it. 00:26:14.610 [2024-12-09 05:20:51.213079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.610 [2024-12-09 05:20:51.213141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.610 [2024-12-09 05:20:51.213157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.610 [2024-12-09 05:20:51.213164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.610 [2024-12-09 05:20:51.213171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.610 [2024-12-09 05:20:51.213187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.610 qpair failed and we were unable to recover it. 00:26:14.610 [2024-12-09 05:20:51.223088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.610 [2024-12-09 05:20:51.223196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.610 [2024-12-09 05:20:51.223211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.610 [2024-12-09 05:20:51.223219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.610 [2024-12-09 05:20:51.223226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.610 [2024-12-09 05:20:51.223242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.610 qpair failed and we were unable to recover it. 
00:26:14.610 [2024-12-09 05:20:51.233080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.610 [2024-12-09 05:20:51.233172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.610 [2024-12-09 05:20:51.233187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.610 [2024-12-09 05:20:51.233194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.610 [2024-12-09 05:20:51.233200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.610 [2024-12-09 05:20:51.233217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.610 qpair failed and we were unable to recover it. 00:26:14.610 [2024-12-09 05:20:51.243134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.610 [2024-12-09 05:20:51.243195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.610 [2024-12-09 05:20:51.243210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.610 [2024-12-09 05:20:51.243218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.610 [2024-12-09 05:20:51.243224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.610 [2024-12-09 05:20:51.243239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.610 qpair failed and we were unable to recover it. 00:26:14.869 [2024-12-09 05:20:51.253173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.870 [2024-12-09 05:20:51.253235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.870 [2024-12-09 05:20:51.253250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.870 [2024-12-09 05:20:51.253258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.870 [2024-12-09 05:20:51.253264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.870 [2024-12-09 05:20:51.253279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.870 qpair failed and we were unable to recover it. 
00:26:14.870 [2024-12-09 05:20:51.263209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.870 [2024-12-09 05:20:51.263273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.870 [2024-12-09 05:20:51.263289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.870 [2024-12-09 05:20:51.263297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.870 [2024-12-09 05:20:51.263303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.870 [2024-12-09 05:20:51.263318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.870 qpair failed and we were unable to recover it. 00:26:14.870 [2024-12-09 05:20:51.273246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.870 [2024-12-09 05:20:51.273310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.870 [2024-12-09 05:20:51.273325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.870 [2024-12-09 05:20:51.273333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.870 [2024-12-09 05:20:51.273339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.870 [2024-12-09 05:20:51.273355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.870 qpair failed and we were unable to recover it. 00:26:14.870 [2024-12-09 05:20:51.283273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.870 [2024-12-09 05:20:51.283333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.870 [2024-12-09 05:20:51.283351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.870 [2024-12-09 05:20:51.283359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.870 [2024-12-09 05:20:51.283365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.870 [2024-12-09 05:20:51.283380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.870 qpair failed and we were unable to recover it. 
00:26:14.870 [2024-12-09 05:20:51.293422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.870 [2024-12-09 05:20:51.293495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.870 [2024-12-09 05:20:51.293511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.870 [2024-12-09 05:20:51.293518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.870 [2024-12-09 05:20:51.293524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.870 [2024-12-09 05:20:51.293540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.870 qpair failed and we were unable to recover it. 00:26:14.870 [2024-12-09 05:20:51.303394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.870 [2024-12-09 05:20:51.303458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.870 [2024-12-09 05:20:51.303472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.870 [2024-12-09 05:20:51.303480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.870 [2024-12-09 05:20:51.303486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.870 [2024-12-09 05:20:51.303502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.870 qpair failed and we were unable to recover it. 00:26:14.870 [2024-12-09 05:20:51.313448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.870 [2024-12-09 05:20:51.313556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.870 [2024-12-09 05:20:51.313571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.870 [2024-12-09 05:20:51.313579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.870 [2024-12-09 05:20:51.313586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.870 [2024-12-09 05:20:51.313601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.870 qpair failed and we were unable to recover it. 
00:26:14.870 [2024-12-09 05:20:51.323410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.870 [2024-12-09 05:20:51.323474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.870 [2024-12-09 05:20:51.323490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.870 [2024-12-09 05:20:51.323501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.870 [2024-12-09 05:20:51.323508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.870 [2024-12-09 05:20:51.323523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.870 qpair failed and we were unable to recover it. 00:26:14.870 [2024-12-09 05:20:51.333392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.870 [2024-12-09 05:20:51.333457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.870 [2024-12-09 05:20:51.333472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.870 [2024-12-09 05:20:51.333480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.870 [2024-12-09 05:20:51.333487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.870 [2024-12-09 05:20:51.333502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.870 qpair failed and we were unable to recover it. 00:26:14.870 [2024-12-09 05:20:51.343406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.870 [2024-12-09 05:20:51.343488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.870 [2024-12-09 05:20:51.343503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.870 [2024-12-09 05:20:51.343511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.870 [2024-12-09 05:20:51.343517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.870 [2024-12-09 05:20:51.343532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.870 qpair failed and we were unable to recover it. 
00:26:14.870 [2024-12-09 05:20:51.353452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.870 [2024-12-09 05:20:51.353515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.870 [2024-12-09 05:20:51.353531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.870 [2024-12-09 05:20:51.353539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.870 [2024-12-09 05:20:51.353546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.870 [2024-12-09 05:20:51.353563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.870 qpair failed and we were unable to recover it. 00:26:14.870 [2024-12-09 05:20:51.363477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.870 [2024-12-09 05:20:51.363572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.870 [2024-12-09 05:20:51.363586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.870 [2024-12-09 05:20:51.363594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.870 [2024-12-09 05:20:51.363600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.870 [2024-12-09 05:20:51.363615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.870 qpair failed and we were unable to recover it. 00:26:14.870 [2024-12-09 05:20:51.373454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.870 [2024-12-09 05:20:51.373521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.870 [2024-12-09 05:20:51.373536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.870 [2024-12-09 05:20:51.373543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.870 [2024-12-09 05:20:51.373550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.870 [2024-12-09 05:20:51.373565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.870 qpair failed and we were unable to recover it. 
00:26:14.871 [2024-12-09 05:20:51.383548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.871 [2024-12-09 05:20:51.383631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.871 [2024-12-09 05:20:51.383646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.871 [2024-12-09 05:20:51.383653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.871 [2024-12-09 05:20:51.383660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.871 [2024-12-09 05:20:51.383675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.871 qpair failed and we were unable to recover it. 00:26:14.871 [2024-12-09 05:20:51.393499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.871 [2024-12-09 05:20:51.393567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.871 [2024-12-09 05:20:51.393583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.871 [2024-12-09 05:20:51.393590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.871 [2024-12-09 05:20:51.393596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.871 [2024-12-09 05:20:51.393612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.871 qpair failed and we were unable to recover it. 00:26:14.871 [2024-12-09 05:20:51.403523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.871 [2024-12-09 05:20:51.403584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.871 [2024-12-09 05:20:51.403598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.871 [2024-12-09 05:20:51.403606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.871 [2024-12-09 05:20:51.403612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.871 [2024-12-09 05:20:51.403627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.871 qpair failed and we were unable to recover it. 
00:26:14.871 [2024-12-09 05:20:51.413557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.871 [2024-12-09 05:20:51.413618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.871 [2024-12-09 05:20:51.413635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.871 [2024-12-09 05:20:51.413643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.871 [2024-12-09 05:20:51.413649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.871 [2024-12-09 05:20:51.413663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.871 qpair failed and we were unable to recover it. 00:26:14.871 [2024-12-09 05:20:51.423632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.871 [2024-12-09 05:20:51.423697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.871 [2024-12-09 05:20:51.423713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.871 [2024-12-09 05:20:51.423720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.871 [2024-12-09 05:20:51.423727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.871 [2024-12-09 05:20:51.423742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.871 qpair failed and we were unable to recover it. 00:26:14.871 [2024-12-09 05:20:51.433676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.871 [2024-12-09 05:20:51.433739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.871 [2024-12-09 05:20:51.433753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.871 [2024-12-09 05:20:51.433761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.871 [2024-12-09 05:20:51.433767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.871 [2024-12-09 05:20:51.433781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.871 qpair failed and we were unable to recover it. 
00:26:14.871 [2024-12-09 05:20:51.443708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.871 [2024-12-09 05:20:51.443767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.871 [2024-12-09 05:20:51.443782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.871 [2024-12-09 05:20:51.443790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.871 [2024-12-09 05:20:51.443796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.871 [2024-12-09 05:20:51.443812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.871 qpair failed and we were unable to recover it. 00:26:14.871 [2024-12-09 05:20:51.453755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.871 [2024-12-09 05:20:51.453822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.871 [2024-12-09 05:20:51.453837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.871 [2024-12-09 05:20:51.453848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.871 [2024-12-09 05:20:51.453855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.871 [2024-12-09 05:20:51.453870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.871 qpair failed and we were unable to recover it. 00:26:14.871 [2024-12-09 05:20:51.463777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.871 [2024-12-09 05:20:51.463841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.871 [2024-12-09 05:20:51.463859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.871 [2024-12-09 05:20:51.463868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.871 [2024-12-09 05:20:51.463877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.871 [2024-12-09 05:20:51.463893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.871 qpair failed and we were unable to recover it. 
00:26:14.871 [2024-12-09 05:20:51.473848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.871 [2024-12-09 05:20:51.473913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.871 [2024-12-09 05:20:51.473927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.871 [2024-12-09 05:20:51.473935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.871 [2024-12-09 05:20:51.473941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.871 [2024-12-09 05:20:51.473956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.871 qpair failed and we were unable to recover it. 00:26:14.871 [2024-12-09 05:20:51.483831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.871 [2024-12-09 05:20:51.483918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.871 [2024-12-09 05:20:51.483934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.871 [2024-12-09 05:20:51.483941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.871 [2024-12-09 05:20:51.483948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.871 [2024-12-09 05:20:51.483963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.871 qpair failed and we were unable to recover it. 00:26:14.871 [2024-12-09 05:20:51.493858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.871 [2024-12-09 05:20:51.493925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.871 [2024-12-09 05:20:51.493940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.871 [2024-12-09 05:20:51.493947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.871 [2024-12-09 05:20:51.493953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.871 [2024-12-09 05:20:51.493968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.871 qpair failed and we were unable to recover it. 
00:26:14.871 [2024-12-09 05:20:51.503980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.871 [2024-12-09 05:20:51.504063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.871 [2024-12-09 05:20:51.504079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.871 [2024-12-09 05:20:51.504087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.871 [2024-12-09 05:20:51.504093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:14.872 [2024-12-09 05:20:51.504108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.872 qpair failed and we were unable to recover it. 00:26:15.131 [2024-12-09 05:20:51.513929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.131 [2024-12-09 05:20:51.513994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.131 [2024-12-09 05:20:51.514016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.131 [2024-12-09 05:20:51.514024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.131 [2024-12-09 05:20:51.514031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.131 [2024-12-09 05:20:51.514047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.131 qpair failed and we were unable to recover it. 00:26:15.131 [2024-12-09 05:20:51.523882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.131 [2024-12-09 05:20:51.523948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.131 [2024-12-09 05:20:51.523963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.131 [2024-12-09 05:20:51.523970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.131 [2024-12-09 05:20:51.523977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.131 [2024-12-09 05:20:51.523992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.131 qpair failed and we were unable to recover it. 
00:26:15.131 [2024-12-09 05:20:51.533965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.131 [2024-12-09 05:20:51.534041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.131 [2024-12-09 05:20:51.534057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.131 [2024-12-09 05:20:51.534064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.131 [2024-12-09 05:20:51.534070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.131 [2024-12-09 05:20:51.534086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.131 qpair failed and we were unable to recover it. 00:26:15.131 [2024-12-09 05:20:51.544023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.132 [2024-12-09 05:20:51.544084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.132 [2024-12-09 05:20:51.544102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.132 [2024-12-09 05:20:51.544109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.132 [2024-12-09 05:20:51.544116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.132 [2024-12-09 05:20:51.544132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.132 qpair failed and we were unable to recover it. 00:26:15.132 [2024-12-09 05:20:51.553987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.132 [2024-12-09 05:20:51.554055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.132 [2024-12-09 05:20:51.554070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.132 [2024-12-09 05:20:51.554077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.132 [2024-12-09 05:20:51.554084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.132 [2024-12-09 05:20:51.554099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.132 qpair failed and we were unable to recover it. 
00:26:15.132 [2024-12-09 05:20:51.564065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.132 [2024-12-09 05:20:51.564126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.132 [2024-12-09 05:20:51.564141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.132 [2024-12-09 05:20:51.564148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.132 [2024-12-09 05:20:51.564155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.132 [2024-12-09 05:20:51.564170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.132 qpair failed and we were unable to recover it. 00:26:15.132 [2024-12-09 05:20:51.574026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.132 [2024-12-09 05:20:51.574089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.132 [2024-12-09 05:20:51.574103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.132 [2024-12-09 05:20:51.574111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.132 [2024-12-09 05:20:51.574117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.132 [2024-12-09 05:20:51.574133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.132 qpair failed and we were unable to recover it. 00:26:15.132 [2024-12-09 05:20:51.584167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.132 [2024-12-09 05:20:51.584275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.132 [2024-12-09 05:20:51.584290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.132 [2024-12-09 05:20:51.584301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.132 [2024-12-09 05:20:51.584308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.132 [2024-12-09 05:20:51.584324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.132 qpair failed and we were unable to recover it. 
00:26:15.132 [2024-12-09 05:20:51.594188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.132 [2024-12-09 05:20:51.594288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.132 [2024-12-09 05:20:51.594303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.132 [2024-12-09 05:20:51.594310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.132 [2024-12-09 05:20:51.594317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.132 [2024-12-09 05:20:51.594332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.132 qpair failed and we were unable to recover it. 00:26:15.132 [2024-12-09 05:20:51.604156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.132 [2024-12-09 05:20:51.604226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.132 [2024-12-09 05:20:51.604241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.132 [2024-12-09 05:20:51.604248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.132 [2024-12-09 05:20:51.604254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.132 [2024-12-09 05:20:51.604269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.132 qpair failed and we were unable to recover it. 00:26:15.132 [2024-12-09 05:20:51.614217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.132 [2024-12-09 05:20:51.614280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.132 [2024-12-09 05:20:51.614294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.132 [2024-12-09 05:20:51.614302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.132 [2024-12-09 05:20:51.614309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.132 [2024-12-09 05:20:51.614324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.132 qpair failed and we were unable to recover it. 
00:26:15.132 [2024-12-09 05:20:51.624255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.132 [2024-12-09 05:20:51.624317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.132 [2024-12-09 05:20:51.624332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.132 [2024-12-09 05:20:51.624340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.132 [2024-12-09 05:20:51.624347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.132 [2024-12-09 05:20:51.624362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.132 qpair failed and we were unable to recover it. 00:26:15.132 [2024-12-09 05:20:51.634263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.132 [2024-12-09 05:20:51.634323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.132 [2024-12-09 05:20:51.634338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.132 [2024-12-09 05:20:51.634345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.132 [2024-12-09 05:20:51.634352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.132 [2024-12-09 05:20:51.634367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.132 qpair failed and we were unable to recover it. 00:26:15.132 [2024-12-09 05:20:51.644275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.132 [2024-12-09 05:20:51.644355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.132 [2024-12-09 05:20:51.644372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.132 [2024-12-09 05:20:51.644379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.132 [2024-12-09 05:20:51.644386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.132 [2024-12-09 05:20:51.644402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.132 qpair failed and we were unable to recover it. 
00:26:15.132 [2024-12-09 05:20:51.654330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.132 [2024-12-09 05:20:51.654396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.132 [2024-12-09 05:20:51.654411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.132 [2024-12-09 05:20:51.654418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.132 [2024-12-09 05:20:51.654425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.132 [2024-12-09 05:20:51.654439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.132 qpair failed and we were unable to recover it. 00:26:15.132 [2024-12-09 05:20:51.664360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.132 [2024-12-09 05:20:51.664424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.132 [2024-12-09 05:20:51.664439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.132 [2024-12-09 05:20:51.664446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.132 [2024-12-09 05:20:51.664453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.132 [2024-12-09 05:20:51.664468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.132 qpair failed and we were unable to recover it. 00:26:15.132 [2024-12-09 05:20:51.674411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.132 [2024-12-09 05:20:51.674499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.133 [2024-12-09 05:20:51.674518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.133 [2024-12-09 05:20:51.674525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.133 [2024-12-09 05:20:51.674531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.133 [2024-12-09 05:20:51.674546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.133 qpair failed and we were unable to recover it. 
00:26:15.133 [2024-12-09 05:20:51.684348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.133 [2024-12-09 05:20:51.684414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.133 [2024-12-09 05:20:51.684430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.133 [2024-12-09 05:20:51.684437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.133 [2024-12-09 05:20:51.684443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.133 [2024-12-09 05:20:51.684458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.133 qpair failed and we were unable to recover it. 00:26:15.133 [2024-12-09 05:20:51.694472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.133 [2024-12-09 05:20:51.694547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.133 [2024-12-09 05:20:51.694564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.133 [2024-12-09 05:20:51.694572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.133 [2024-12-09 05:20:51.694578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.133 [2024-12-09 05:20:51.694593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.133 qpair failed and we were unable to recover it. 00:26:15.133 [2024-12-09 05:20:51.704504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.133 [2024-12-09 05:20:51.704567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.133 [2024-12-09 05:20:51.704582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.133 [2024-12-09 05:20:51.704590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.133 [2024-12-09 05:20:51.704596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.133 [2024-12-09 05:20:51.704611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.133 qpair failed and we were unable to recover it. 
00:26:15.133 [2024-12-09 05:20:51.714442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.133 [2024-12-09 05:20:51.714506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.133 [2024-12-09 05:20:51.714521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.133 [2024-12-09 05:20:51.714531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.133 [2024-12-09 05:20:51.714538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.133 [2024-12-09 05:20:51.714552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.133 qpair failed and we were unable to recover it. 00:26:15.133 [2024-12-09 05:20:51.724502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.133 [2024-12-09 05:20:51.724563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.133 [2024-12-09 05:20:51.724578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.133 [2024-12-09 05:20:51.724585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.133 [2024-12-09 05:20:51.724592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.133 [2024-12-09 05:20:51.724607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.133 qpair failed and we were unable to recover it. 00:26:15.133 [2024-12-09 05:20:51.734543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.133 [2024-12-09 05:20:51.734609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.133 [2024-12-09 05:20:51.734624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.133 [2024-12-09 05:20:51.734632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.133 [2024-12-09 05:20:51.734638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.133 [2024-12-09 05:20:51.734654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.133 qpair failed and we were unable to recover it. 
00:26:15.133 [2024-12-09 05:20:51.744598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.133 [2024-12-09 05:20:51.744663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.133 [2024-12-09 05:20:51.744678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.133 [2024-12-09 05:20:51.744685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.133 [2024-12-09 05:20:51.744692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.133 [2024-12-09 05:20:51.744707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.133 qpair failed and we were unable to recover it. 00:26:15.133 [2024-12-09 05:20:51.754581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.133 [2024-12-09 05:20:51.754680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.133 [2024-12-09 05:20:51.754695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.133 [2024-12-09 05:20:51.754702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.133 [2024-12-09 05:20:51.754709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.133 [2024-12-09 05:20:51.754724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.133 qpair failed and we were unable to recover it. 00:26:15.133 [2024-12-09 05:20:51.764657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.133 [2024-12-09 05:20:51.764721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.133 [2024-12-09 05:20:51.764736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.133 [2024-12-09 05:20:51.764743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.133 [2024-12-09 05:20:51.764749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.133 [2024-12-09 05:20:51.764763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.133 qpair failed and we were unable to recover it. 
00:26:15.393 [2024-12-09 05:20:51.774665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.393 [2024-12-09 05:20:51.774722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.393 [2024-12-09 05:20:51.774739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.393 [2024-12-09 05:20:51.774747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.393 [2024-12-09 05:20:51.774754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.393 [2024-12-09 05:20:51.774770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.393 qpair failed and we were unable to recover it. 00:26:15.393 [2024-12-09 05:20:51.784686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.393 [2024-12-09 05:20:51.784798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.393 [2024-12-09 05:20:51.784813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.393 [2024-12-09 05:20:51.784821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.393 [2024-12-09 05:20:51.784828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.393 [2024-12-09 05:20:51.784844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.393 qpair failed and we were unable to recover it. 00:26:15.393 [2024-12-09 05:20:51.794804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.393 [2024-12-09 05:20:51.794885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.393 [2024-12-09 05:20:51.794900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.393 [2024-12-09 05:20:51.794907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.393 [2024-12-09 05:20:51.794913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.393 [2024-12-09 05:20:51.794928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.393 qpair failed and we were unable to recover it. 
00:26:15.393 [2024-12-09 05:20:51.804782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.393 [2024-12-09 05:20:51.804846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.393 [2024-12-09 05:20:51.804860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.393 [2024-12-09 05:20:51.804868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.393 [2024-12-09 05:20:51.804874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.393 [2024-12-09 05:20:51.804889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.393 qpair failed and we were unable to recover it. 00:26:15.393 [2024-12-09 05:20:51.814754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.393 [2024-12-09 05:20:51.814825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.393 [2024-12-09 05:20:51.814840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.393 [2024-12-09 05:20:51.814848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.393 [2024-12-09 05:20:51.814854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.393 [2024-12-09 05:20:51.814868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.393 qpair failed and we were unable to recover it. 00:26:15.393 [2024-12-09 05:20:51.824787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.393 [2024-12-09 05:20:51.824853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.393 [2024-12-09 05:20:51.824869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.393 [2024-12-09 05:20:51.824877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.393 [2024-12-09 05:20:51.824883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.393 [2024-12-09 05:20:51.824898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.393 qpair failed and we were unable to recover it. 
00:26:15.393 [2024-12-09 05:20:51.834826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.393 [2024-12-09 05:20:51.834891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.393 [2024-12-09 05:20:51.834907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.393 [2024-12-09 05:20:51.834915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.393 [2024-12-09 05:20:51.834921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.393 [2024-12-09 05:20:51.834936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.393 qpair failed and we were unable to recover it. 00:26:15.393 [2024-12-09 05:20:51.844852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.394 [2024-12-09 05:20:51.844908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.394 [2024-12-09 05:20:51.844923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.394 [2024-12-09 05:20:51.844933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.394 [2024-12-09 05:20:51.844940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.394 [2024-12-09 05:20:51.844956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.394 qpair failed and we were unable to recover it. 00:26:15.394 [2024-12-09 05:20:51.854869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.394 [2024-12-09 05:20:51.854924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.394 [2024-12-09 05:20:51.854939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.394 [2024-12-09 05:20:51.854947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.394 [2024-12-09 05:20:51.854954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.394 [2024-12-09 05:20:51.854968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.394 qpair failed and we were unable to recover it. 
00:26:15.394 [2024-12-09 05:20:51.864883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.394 [2024-12-09 05:20:51.864951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.394 [2024-12-09 05:20:51.864966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.394 [2024-12-09 05:20:51.864973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.394 [2024-12-09 05:20:51.864980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.394 [2024-12-09 05:20:51.864994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.394 qpair failed and we were unable to recover it. 00:26:15.394 [2024-12-09 05:20:51.874986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.394 [2024-12-09 05:20:51.875071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.394 [2024-12-09 05:20:51.875087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.394 [2024-12-09 05:20:51.875094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.394 [2024-12-09 05:20:51.875100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.394 [2024-12-09 05:20:51.875115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.394 qpair failed and we were unable to recover it. 00:26:15.394 [2024-12-09 05:20:51.884989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.394 [2024-12-09 05:20:51.885095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.394 [2024-12-09 05:20:51.885111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.394 [2024-12-09 05:20:51.885118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.394 [2024-12-09 05:20:51.885125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.394 [2024-12-09 05:20:51.885141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.394 qpair failed and we were unable to recover it. 
00:26:15.394 [2024-12-09 05:20:51.894988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.394 [2024-12-09 05:20:51.895052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.394 [2024-12-09 05:20:51.895067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.394 [2024-12-09 05:20:51.895074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.394 [2024-12-09 05:20:51.895081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.394 [2024-12-09 05:20:51.895095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.394 qpair failed and we were unable to recover it. 00:26:15.394 [2024-12-09 05:20:51.905038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.394 [2024-12-09 05:20:51.905103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.394 [2024-12-09 05:20:51.905118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.394 [2024-12-09 05:20:51.905126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.394 [2024-12-09 05:20:51.905132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.394 [2024-12-09 05:20:51.905148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.394 qpair failed and we were unable to recover it. 00:26:15.394 [2024-12-09 05:20:51.915096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.394 [2024-12-09 05:20:51.915208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.394 [2024-12-09 05:20:51.915222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.394 [2024-12-09 05:20:51.915230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.394 [2024-12-09 05:20:51.915236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.394 [2024-12-09 05:20:51.915252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.394 qpair failed and we were unable to recover it. 
00:26:15.394 [2024-12-09 05:20:51.925110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.394 [2024-12-09 05:20:51.925170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.394 [2024-12-09 05:20:51.925184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.394 [2024-12-09 05:20:51.925192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.394 [2024-12-09 05:20:51.925199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.394 [2024-12-09 05:20:51.925214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.394 qpair failed and we were unable to recover it. 00:26:15.394 [2024-12-09 05:20:51.935094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.394 [2024-12-09 05:20:51.935158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.394 [2024-12-09 05:20:51.935173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.394 [2024-12-09 05:20:51.935180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.394 [2024-12-09 05:20:51.935186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.394 [2024-12-09 05:20:51.935201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.394 qpair failed and we were unable to recover it. 00:26:15.394 [2024-12-09 05:20:51.945184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.394 [2024-12-09 05:20:51.945249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.394 [2024-12-09 05:20:51.945263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.394 [2024-12-09 05:20:51.945270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.394 [2024-12-09 05:20:51.945276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.394 [2024-12-09 05:20:51.945292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.394 qpair failed and we were unable to recover it. 
00:26:15.394 [2024-12-09 05:20:51.955187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.394 [2024-12-09 05:20:51.955260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.394 [2024-12-09 05:20:51.955277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.394 [2024-12-09 05:20:51.955289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.394 [2024-12-09 05:20:51.955298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.394 [2024-12-09 05:20:51.955314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.394 qpair failed and we were unable to recover it. 00:26:15.394 [2024-12-09 05:20:51.965128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.394 [2024-12-09 05:20:51.965187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.394 [2024-12-09 05:20:51.965202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.394 [2024-12-09 05:20:51.965209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.394 [2024-12-09 05:20:51.965215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.394 [2024-12-09 05:20:51.965230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.394 qpair failed and we were unable to recover it. 00:26:15.394 [2024-12-09 05:20:51.975207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.394 [2024-12-09 05:20:51.975267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.394 [2024-12-09 05:20:51.975282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.394 [2024-12-09 05:20:51.975293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.394 [2024-12-09 05:20:51.975300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.394 [2024-12-09 05:20:51.975315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.394 qpair failed and we were unable to recover it. 
00:26:15.394 [2024-12-09 05:20:51.985288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.394 [2024-12-09 05:20:51.985396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.394 [2024-12-09 05:20:51.985411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.394 [2024-12-09 05:20:51.985419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.394 [2024-12-09 05:20:51.985426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.394 [2024-12-09 05:20:51.985442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.394 qpair failed and we were unable to recover it. 00:26:15.395 [2024-12-09 05:20:51.995278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.395 [2024-12-09 05:20:51.995339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.395 [2024-12-09 05:20:51.995354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.395 [2024-12-09 05:20:51.995361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.395 [2024-12-09 05:20:51.995369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.395 [2024-12-09 05:20:51.995386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.395 qpair failed and we were unable to recover it. 00:26:15.395 [2024-12-09 05:20:52.005291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.395 [2024-12-09 05:20:52.005346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.395 [2024-12-09 05:20:52.005361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.395 [2024-12-09 05:20:52.005368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.395 [2024-12-09 05:20:52.005375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.395 [2024-12-09 05:20:52.005390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.395 qpair failed and we were unable to recover it. 
00:26:15.395 [2024-12-09 05:20:52.015382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.395 [2024-12-09 05:20:52.015440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.395 [2024-12-09 05:20:52.015454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.395 [2024-12-09 05:20:52.015462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.395 [2024-12-09 05:20:52.015469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.395 [2024-12-09 05:20:52.015487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.395 qpair failed and we were unable to recover it. 00:26:15.395 [2024-12-09 05:20:52.025420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.395 [2024-12-09 05:20:52.025526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.395 [2024-12-09 05:20:52.025541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.395 [2024-12-09 05:20:52.025550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.395 [2024-12-09 05:20:52.025557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.395 [2024-12-09 05:20:52.025572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.395 qpair failed and we were unable to recover it. 00:26:15.395 [2024-12-09 05:20:52.035385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.395 [2024-12-09 05:20:52.035450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.395 [2024-12-09 05:20:52.035465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.395 [2024-12-09 05:20:52.035473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.395 [2024-12-09 05:20:52.035479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.395 [2024-12-09 05:20:52.035494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.395 qpair failed and we were unable to recover it. 
00:26:15.654 [2024-12-09 05:20:52.045400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.655 [2024-12-09 05:20:52.045508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.655 [2024-12-09 05:20:52.045523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.655 [2024-12-09 05:20:52.045531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.655 [2024-12-09 05:20:52.045538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.655 [2024-12-09 05:20:52.045554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.655 qpair failed and we were unable to recover it. 00:26:15.655 [2024-12-09 05:20:52.055444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.655 [2024-12-09 05:20:52.055503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.655 [2024-12-09 05:20:52.055517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.655 [2024-12-09 05:20:52.055525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.655 [2024-12-09 05:20:52.055531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.655 [2024-12-09 05:20:52.055546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.655 qpair failed and we were unable to recover it. 00:26:15.655 [2024-12-09 05:20:52.065472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.655 [2024-12-09 05:20:52.065540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.655 [2024-12-09 05:20:52.065554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.655 [2024-12-09 05:20:52.065562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.655 [2024-12-09 05:20:52.065568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.655 [2024-12-09 05:20:52.065583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.655 qpair failed and we were unable to recover it. 
00:26:15.655 [2024-12-09 05:20:52.075499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.655 [2024-12-09 05:20:52.075557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.655 [2024-12-09 05:20:52.075572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.655 [2024-12-09 05:20:52.075579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.655 [2024-12-09 05:20:52.075586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.655 [2024-12-09 05:20:52.075601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.655 qpair failed and we were unable to recover it. 00:26:15.655 [2024-12-09 05:20:52.085521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.655 [2024-12-09 05:20:52.085576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.655 [2024-12-09 05:20:52.085591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.655 [2024-12-09 05:20:52.085598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.655 [2024-12-09 05:20:52.085604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.655 [2024-12-09 05:20:52.085620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.655 qpair failed and we were unable to recover it. 00:26:15.655 [2024-12-09 05:20:52.095591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.655 [2024-12-09 05:20:52.095655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.655 [2024-12-09 05:20:52.095670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.655 [2024-12-09 05:20:52.095677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.655 [2024-12-09 05:20:52.095684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.655 [2024-12-09 05:20:52.095699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.655 qpair failed and we were unable to recover it. 
00:26:15.655 [2024-12-09 05:20:52.105583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.655 [2024-12-09 05:20:52.105648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.655 [2024-12-09 05:20:52.105663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.655 [2024-12-09 05:20:52.105674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.655 [2024-12-09 05:20:52.105681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.655 [2024-12-09 05:20:52.105696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.655 qpair failed and we were unable to recover it. 00:26:15.655 [2024-12-09 05:20:52.115617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.655 [2024-12-09 05:20:52.115682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.655 [2024-12-09 05:20:52.115696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.655 [2024-12-09 05:20:52.115703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.655 [2024-12-09 05:20:52.115710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.655 [2024-12-09 05:20:52.115725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.655 qpair failed and we were unable to recover it. 00:26:15.655 [2024-12-09 05:20:52.125664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.655 [2024-12-09 05:20:52.125725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.655 [2024-12-09 05:20:52.125740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.655 [2024-12-09 05:20:52.125747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.655 [2024-12-09 05:20:52.125753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.655 [2024-12-09 05:20:52.125768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.655 qpair failed and we were unable to recover it. 
00:26:15.655 [2024-12-09 05:20:52.135608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.655 [2024-12-09 05:20:52.135684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.655 [2024-12-09 05:20:52.135699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.655 [2024-12-09 05:20:52.135707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.655 [2024-12-09 05:20:52.135713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.655 [2024-12-09 05:20:52.135728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.655 qpair failed and we were unable to recover it. 00:26:15.655 [2024-12-09 05:20:52.145706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.655 [2024-12-09 05:20:52.145766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.655 [2024-12-09 05:20:52.145781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.655 [2024-12-09 05:20:52.145789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.655 [2024-12-09 05:20:52.145796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.655 [2024-12-09 05:20:52.145816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.655 qpair failed and we were unable to recover it. 00:26:15.655 [2024-12-09 05:20:52.155675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.655 [2024-12-09 05:20:52.155745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.655 [2024-12-09 05:20:52.155760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.655 [2024-12-09 05:20:52.155767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.655 [2024-12-09 05:20:52.155773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.656 [2024-12-09 05:20:52.155789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.656 qpair failed and we were unable to recover it. 
00:26:15.656 [2024-12-09 05:20:52.165769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.656 [2024-12-09 05:20:52.165847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.656 [2024-12-09 05:20:52.165862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.656 [2024-12-09 05:20:52.165870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.656 [2024-12-09 05:20:52.165876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.656 [2024-12-09 05:20:52.165891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.656 qpair failed and we were unable to recover it. 00:26:15.656 [2024-12-09 05:20:52.175783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.656 [2024-12-09 05:20:52.175878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.656 [2024-12-09 05:20:52.175893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.656 [2024-12-09 05:20:52.175901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.656 [2024-12-09 05:20:52.175907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.656 [2024-12-09 05:20:52.175923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.656 qpair failed and we were unable to recover it. 00:26:15.656 [2024-12-09 05:20:52.185767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.656 [2024-12-09 05:20:52.185827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.656 [2024-12-09 05:20:52.185843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.656 [2024-12-09 05:20:52.185850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.656 [2024-12-09 05:20:52.185857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.656 [2024-12-09 05:20:52.185872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.656 qpair failed and we were unable to recover it. 
00:26:15.656 [2024-12-09 05:20:52.195890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.656 [2024-12-09 05:20:52.196004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.656 [2024-12-09 05:20:52.196020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.656 [2024-12-09 05:20:52.196028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.656 [2024-12-09 05:20:52.196035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.656 [2024-12-09 05:20:52.196051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.656 qpair failed and we were unable to recover it. 00:26:15.656 [2024-12-09 05:20:52.205864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.656 [2024-12-09 05:20:52.205944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.656 [2024-12-09 05:20:52.205959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.656 [2024-12-09 05:20:52.205967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.656 [2024-12-09 05:20:52.205973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.656 [2024-12-09 05:20:52.205987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.656 qpair failed and we were unable to recover it. 00:26:15.656 [2024-12-09 05:20:52.215942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.656 [2024-12-09 05:20:52.216009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.656 [2024-12-09 05:20:52.216026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.656 [2024-12-09 05:20:52.216033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.656 [2024-12-09 05:20:52.216040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.656 [2024-12-09 05:20:52.216055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.656 qpair failed and we were unable to recover it. 
00:26:15.656 [2024-12-09 05:20:52.225896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.656 [2024-12-09 05:20:52.225958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.656 [2024-12-09 05:20:52.225974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.656 [2024-12-09 05:20:52.225981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.656 [2024-12-09 05:20:52.225988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.656 [2024-12-09 05:20:52.226007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.656 qpair failed and we were unable to recover it. 00:26:15.656 [2024-12-09 05:20:52.235986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.656 [2024-12-09 05:20:52.236053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.656 [2024-12-09 05:20:52.236068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.656 [2024-12-09 05:20:52.236078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.656 [2024-12-09 05:20:52.236085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.656 [2024-12-09 05:20:52.236101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.656 qpair failed and we were unable to recover it. 00:26:15.656 [2024-12-09 05:20:52.245990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.656 [2024-12-09 05:20:52.246089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.656 [2024-12-09 05:20:52.246105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.656 [2024-12-09 05:20:52.246112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.656 [2024-12-09 05:20:52.246118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.656 [2024-12-09 05:20:52.246133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.656 qpair failed and we were unable to recover it. 
00:26:15.656 [2024-12-09 05:20:52.256021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.656 [2024-12-09 05:20:52.256078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.656 [2024-12-09 05:20:52.256093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.656 [2024-12-09 05:20:52.256101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.656 [2024-12-09 05:20:52.256107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.656 [2024-12-09 05:20:52.256123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.656 qpair failed and we were unable to recover it. 00:26:15.656 [2024-12-09 05:20:52.266061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.656 [2024-12-09 05:20:52.266124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.656 [2024-12-09 05:20:52.266138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.656 [2024-12-09 05:20:52.266146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.656 [2024-12-09 05:20:52.266152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.656 [2024-12-09 05:20:52.266167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.656 qpair failed and we were unable to recover it. 00:26:15.656 [2024-12-09 05:20:52.276093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.656 [2024-12-09 05:20:52.276155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.657 [2024-12-09 05:20:52.276171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.657 [2024-12-09 05:20:52.276178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.657 [2024-12-09 05:20:52.276185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.657 [2024-12-09 05:20:52.276203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.657 qpair failed and we were unable to recover it. 
00:26:15.657 [2024-12-09 05:20:52.286099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.657 [2024-12-09 05:20:52.286155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.657 [2024-12-09 05:20:52.286171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.657 [2024-12-09 05:20:52.286178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.657 [2024-12-09 05:20:52.286185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.657 [2024-12-09 05:20:52.286200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.657 qpair failed and we were unable to recover it. 00:26:15.657 [2024-12-09 05:20:52.296142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.657 [2024-12-09 05:20:52.296204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.657 [2024-12-09 05:20:52.296219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.657 [2024-12-09 05:20:52.296226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.657 [2024-12-09 05:20:52.296232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.657 [2024-12-09 05:20:52.296248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.657 qpair failed and we were unable to recover it. 00:26:15.917 [2024-12-09 05:20:52.306181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.917 [2024-12-09 05:20:52.306281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.917 [2024-12-09 05:20:52.306295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.917 [2024-12-09 05:20:52.306302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.917 [2024-12-09 05:20:52.306308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.917 [2024-12-09 05:20:52.306324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.917 qpair failed and we were unable to recover it. 
00:26:15.917 [2024-12-09 05:20:52.316208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.917 [2024-12-09 05:20:52.316282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.917 [2024-12-09 05:20:52.316297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.917 [2024-12-09 05:20:52.316304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.917 [2024-12-09 05:20:52.316310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.917 [2024-12-09 05:20:52.316325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.917 qpair failed and we were unable to recover it. 00:26:15.917 [2024-12-09 05:20:52.326286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.917 [2024-12-09 05:20:52.326353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.917 [2024-12-09 05:20:52.326368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.917 [2024-12-09 05:20:52.326375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.917 [2024-12-09 05:20:52.326381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.917 [2024-12-09 05:20:52.326397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.917 qpair failed and we were unable to recover it. 00:26:15.917 [2024-12-09 05:20:52.336261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.917 [2024-12-09 05:20:52.336325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.917 [2024-12-09 05:20:52.336340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.917 [2024-12-09 05:20:52.336347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.917 [2024-12-09 05:20:52.336353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.917 [2024-12-09 05:20:52.336368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.917 qpair failed and we were unable to recover it. 
00:26:15.917 [2024-12-09 05:20:52.346297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.918 [2024-12-09 05:20:52.346387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.918 [2024-12-09 05:20:52.346402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.918 [2024-12-09 05:20:52.346409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.918 [2024-12-09 05:20:52.346415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.918 [2024-12-09 05:20:52.346429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.918 qpair failed and we were unable to recover it. 00:26:15.918 [2024-12-09 05:20:52.356326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.918 [2024-12-09 05:20:52.356388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.918 [2024-12-09 05:20:52.356403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.918 [2024-12-09 05:20:52.356410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.918 [2024-12-09 05:20:52.356416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.918 [2024-12-09 05:20:52.356431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.918 qpair failed and we were unable to recover it. 00:26:15.918 [2024-12-09 05:20:52.366339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.918 [2024-12-09 05:20:52.366405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.918 [2024-12-09 05:20:52.366419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.918 [2024-12-09 05:20:52.366430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.918 [2024-12-09 05:20:52.366436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.918 [2024-12-09 05:20:52.366450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.918 qpair failed and we were unable to recover it. 
00:26:15.918 [2024-12-09 05:20:52.376425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.918 [2024-12-09 05:20:52.376502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.918 [2024-12-09 05:20:52.376517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.918 [2024-12-09 05:20:52.376524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.918 [2024-12-09 05:20:52.376530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.918 [2024-12-09 05:20:52.376545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.918 qpair failed and we were unable to recover it. 00:26:15.918 [2024-12-09 05:20:52.386387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.918 [2024-12-09 05:20:52.386448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.918 [2024-12-09 05:20:52.386463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.918 [2024-12-09 05:20:52.386471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.918 [2024-12-09 05:20:52.386477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.918 [2024-12-09 05:20:52.386492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.918 qpair failed and we were unable to recover it. 00:26:15.918 [2024-12-09 05:20:52.396442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.918 [2024-12-09 05:20:52.396502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.918 [2024-12-09 05:20:52.396516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.918 [2024-12-09 05:20:52.396524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.918 [2024-12-09 05:20:52.396531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.918 [2024-12-09 05:20:52.396546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.918 qpair failed and we were unable to recover it. 
00:26:15.918 [2024-12-09 05:20:52.406465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.918 [2024-12-09 05:20:52.406523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.918 [2024-12-09 05:20:52.406536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.918 [2024-12-09 05:20:52.406544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.918 [2024-12-09 05:20:52.406550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.918 [2024-12-09 05:20:52.406567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.918 qpair failed and we were unable to recover it. 00:26:15.918 [2024-12-09 05:20:52.416491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.918 [2024-12-09 05:20:52.416554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.918 [2024-12-09 05:20:52.416570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.918 [2024-12-09 05:20:52.416577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.918 [2024-12-09 05:20:52.416584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.918 [2024-12-09 05:20:52.416600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.918 qpair failed and we were unable to recover it. 00:26:15.918 [2024-12-09 05:20:52.426545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.918 [2024-12-09 05:20:52.426656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.918 [2024-12-09 05:20:52.426671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.918 [2024-12-09 05:20:52.426679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.918 [2024-12-09 05:20:52.426686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.918 [2024-12-09 05:20:52.426701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.918 qpair failed and we were unable to recover it. 
00:26:15.918 [2024-12-09 05:20:52.436560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.918 [2024-12-09 05:20:52.436620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.918 [2024-12-09 05:20:52.436635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.918 [2024-12-09 05:20:52.436642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.918 [2024-12-09 05:20:52.436648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.918 [2024-12-09 05:20:52.436664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.918 qpair failed and we were unable to recover it. 00:26:15.918 [2024-12-09 05:20:52.446588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.918 [2024-12-09 05:20:52.446656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.918 [2024-12-09 05:20:52.446671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.918 [2024-12-09 05:20:52.446678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.918 [2024-12-09 05:20:52.446684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.918 [2024-12-09 05:20:52.446700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.918 qpair failed and we were unable to recover it. 00:26:15.918 [2024-12-09 05:20:52.456622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.918 [2024-12-09 05:20:52.456681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.918 [2024-12-09 05:20:52.456696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.918 [2024-12-09 05:20:52.456703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.918 [2024-12-09 05:20:52.456710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.918 [2024-12-09 05:20:52.456724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.918 qpair failed and we were unable to recover it. 
00:26:15.918 [2024-12-09 05:20:52.466675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.918 [2024-12-09 05:20:52.466783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.918 [2024-12-09 05:20:52.466798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.918 [2024-12-09 05:20:52.466805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.918 [2024-12-09 05:20:52.466812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.918 [2024-12-09 05:20:52.466827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.918 qpair failed and we were unable to recover it. 00:26:15.919 [2024-12-09 05:20:52.476681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.919 [2024-12-09 05:20:52.476741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.919 [2024-12-09 05:20:52.476756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.919 [2024-12-09 05:20:52.476764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.919 [2024-12-09 05:20:52.476770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.919 [2024-12-09 05:20:52.476786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.919 qpair failed and we were unable to recover it. 00:26:15.919 [2024-12-09 05:20:52.486720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.919 [2024-12-09 05:20:52.486790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.919 [2024-12-09 05:20:52.486804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.919 [2024-12-09 05:20:52.486812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.919 [2024-12-09 05:20:52.486818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.919 [2024-12-09 05:20:52.486833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.919 qpair failed and we were unable to recover it. 
00:26:15.919 [2024-12-09 05:20:52.496772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.919 [2024-12-09 05:20:52.496833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.919 [2024-12-09 05:20:52.496848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.919 [2024-12-09 05:20:52.496859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.919 [2024-12-09 05:20:52.496866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.919 [2024-12-09 05:20:52.496880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.919 qpair failed and we were unable to recover it. 00:26:15.919 [2024-12-09 05:20:52.506773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.919 [2024-12-09 05:20:52.506839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.919 [2024-12-09 05:20:52.506854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.919 [2024-12-09 05:20:52.506862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.919 [2024-12-09 05:20:52.506868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.919 [2024-12-09 05:20:52.506883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.919 qpair failed and we were unable to recover it. 00:26:15.919 [2024-12-09 05:20:52.516732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.919 [2024-12-09 05:20:52.516806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.919 [2024-12-09 05:20:52.516823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.919 [2024-12-09 05:20:52.516830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.919 [2024-12-09 05:20:52.516836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.919 [2024-12-09 05:20:52.516852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.919 qpair failed and we were unable to recover it. 
00:26:15.919 [2024-12-09 05:20:52.526868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.919 [2024-12-09 05:20:52.526979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.919 [2024-12-09 05:20:52.526995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.919 [2024-12-09 05:20:52.527007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.919 [2024-12-09 05:20:52.527013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.919 [2024-12-09 05:20:52.527028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.919 qpair failed and we were unable to recover it. 00:26:15.919 [2024-12-09 05:20:52.536804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.919 [2024-12-09 05:20:52.536874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.919 [2024-12-09 05:20:52.536889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.919 [2024-12-09 05:20:52.536897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.919 [2024-12-09 05:20:52.536903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.919 [2024-12-09 05:20:52.536922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.919 qpair failed and we were unable to recover it. 00:26:15.919 [2024-12-09 05:20:52.546871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.919 [2024-12-09 05:20:52.546935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.919 [2024-12-09 05:20:52.546949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.919 [2024-12-09 05:20:52.546957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.919 [2024-12-09 05:20:52.546963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.919 [2024-12-09 05:20:52.546980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.919 qpair failed and we were unable to recover it. 
00:26:15.919 [2024-12-09 05:20:52.556952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:15.919 [2024-12-09 05:20:52.557022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:15.919 [2024-12-09 05:20:52.557039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:15.919 [2024-12-09 05:20:52.557046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:15.919 [2024-12-09 05:20:52.557052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:15.919 [2024-12-09 05:20:52.557068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.919 qpair failed and we were unable to recover it. 00:26:16.179 [2024-12-09 05:20:52.566939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.179 [2024-12-09 05:20:52.567001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.179 [2024-12-09 05:20:52.567018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.179 [2024-12-09 05:20:52.567025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.179 [2024-12-09 05:20:52.567032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.179 [2024-12-09 05:20:52.567047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.179 qpair failed and we were unable to recover it. 00:26:16.179 [2024-12-09 05:20:52.576952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.179 [2024-12-09 05:20:52.577016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.179 [2024-12-09 05:20:52.577031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.179 [2024-12-09 05:20:52.577038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.179 [2024-12-09 05:20:52.577044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.179 [2024-12-09 05:20:52.577059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.179 qpair failed and we were unable to recover it. 
00:26:16.179 [2024-12-09 05:20:52.587011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.179 [2024-12-09 05:20:52.587087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.179 [2024-12-09 05:20:52.587103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.179 [2024-12-09 05:20:52.587110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.179 [2024-12-09 05:20:52.587116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.179 [2024-12-09 05:20:52.587132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.179 qpair failed and we were unable to recover it. 00:26:16.179 [2024-12-09 05:20:52.597034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.179 [2024-12-09 05:20:52.597099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.179 [2024-12-09 05:20:52.597114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.179 [2024-12-09 05:20:52.597122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.179 [2024-12-09 05:20:52.597128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.179 [2024-12-09 05:20:52.597143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.179 qpair failed and we were unable to recover it. 00:26:16.179 [2024-12-09 05:20:52.607035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.179 [2024-12-09 05:20:52.607104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.179 [2024-12-09 05:20:52.607119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.179 [2024-12-09 05:20:52.607126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.179 [2024-12-09 05:20:52.607132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.180 [2024-12-09 05:20:52.607148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.180 qpair failed and we were unable to recover it. 
00:26:16.180 [2024-12-09 05:20:52.617076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.180 [2024-12-09 05:20:52.617137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.180 [2024-12-09 05:20:52.617154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.180 [2024-12-09 05:20:52.617161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.180 [2024-12-09 05:20:52.617168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.180 [2024-12-09 05:20:52.617183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.180 qpair failed and we were unable to recover it. 00:26:16.180 [2024-12-09 05:20:52.627130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.180 [2024-12-09 05:20:52.627239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.180 [2024-12-09 05:20:52.627255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.180 [2024-12-09 05:20:52.627265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.180 [2024-12-09 05:20:52.627272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.180 [2024-12-09 05:20:52.627289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.180 qpair failed and we were unable to recover it. 00:26:16.180 [2024-12-09 05:20:52.637108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.180 [2024-12-09 05:20:52.637174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.180 [2024-12-09 05:20:52.637189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.180 [2024-12-09 05:20:52.637196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.180 [2024-12-09 05:20:52.637202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.180 [2024-12-09 05:20:52.637219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.180 qpair failed and we were unable to recover it. 
00:26:16.180 [2024-12-09 05:20:52.647150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.180 [2024-12-09 05:20:52.647222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.180 [2024-12-09 05:20:52.647237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.180 [2024-12-09 05:20:52.647244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.180 [2024-12-09 05:20:52.647250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.180 [2024-12-09 05:20:52.647265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.180 qpair failed and we were unable to recover it. 00:26:16.180 [2024-12-09 05:20:52.657128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.180 [2024-12-09 05:20:52.657198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.180 [2024-12-09 05:20:52.657213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.180 [2024-12-09 05:20:52.657220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.180 [2024-12-09 05:20:52.657226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.180 [2024-12-09 05:20:52.657240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.180 qpair failed and we were unable to recover it. 00:26:16.180 [2024-12-09 05:20:52.667242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.180 [2024-12-09 05:20:52.667305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.180 [2024-12-09 05:20:52.667319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.180 [2024-12-09 05:20:52.667326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.180 [2024-12-09 05:20:52.667332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.180 [2024-12-09 05:20:52.667350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.180 qpair failed and we were unable to recover it. 
00:26:16.180 [2024-12-09 05:20:52.677291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.180 [2024-12-09 05:20:52.677403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.180 [2024-12-09 05:20:52.677417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.180 [2024-12-09 05:20:52.677426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.180 [2024-12-09 05:20:52.677432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.180 [2024-12-09 05:20:52.677448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.180 qpair failed and we were unable to recover it. 00:26:16.180 [2024-12-09 05:20:52.687212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.180 [2024-12-09 05:20:52.687284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.180 [2024-12-09 05:20:52.687299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.180 [2024-12-09 05:20:52.687306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.180 [2024-12-09 05:20:52.687313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.180 [2024-12-09 05:20:52.687328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.180 qpair failed and we were unable to recover it. 00:26:16.180 [2024-12-09 05:20:52.697342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.180 [2024-12-09 05:20:52.697440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.180 [2024-12-09 05:20:52.697455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.180 [2024-12-09 05:20:52.697462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.180 [2024-12-09 05:20:52.697469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.180 [2024-12-09 05:20:52.697484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.180 qpair failed and we were unable to recover it. 
00:26:16.180 [2024-12-09 05:20:52.707341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.180 [2024-12-09 05:20:52.707401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.180 [2024-12-09 05:20:52.707416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.180 [2024-12-09 05:20:52.707423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.180 [2024-12-09 05:20:52.707430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.180 [2024-12-09 05:20:52.707445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.180 qpair failed and we were unable to recover it. 00:26:16.180 [2024-12-09 05:20:52.717411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.180 [2024-12-09 05:20:52.717477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.180 [2024-12-09 05:20:52.717492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.180 [2024-12-09 05:20:52.717499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.180 [2024-12-09 05:20:52.717506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.180 [2024-12-09 05:20:52.717521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.180 qpair failed and we were unable to recover it. 00:26:16.180 [2024-12-09 05:20:52.727399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.180 [2024-12-09 05:20:52.727459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.180 [2024-12-09 05:20:52.727475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.180 [2024-12-09 05:20:52.727482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.180 [2024-12-09 05:20:52.727488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.180 [2024-12-09 05:20:52.727503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.180 qpair failed and we were unable to recover it. 
00:26:16.180 [2024-12-09 05:20:52.737352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.180 [2024-12-09 05:20:52.737410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.180 [2024-12-09 05:20:52.737424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.180 [2024-12-09 05:20:52.737432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.180 [2024-12-09 05:20:52.737438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.180 [2024-12-09 05:20:52.737453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.181 qpair failed and we were unable to recover it. 00:26:16.181 [2024-12-09 05:20:52.747462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.181 [2024-12-09 05:20:52.747526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.181 [2024-12-09 05:20:52.747541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.181 [2024-12-09 05:20:52.747548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.181 [2024-12-09 05:20:52.747555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.181 [2024-12-09 05:20:52.747570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.181 qpair failed and we were unable to recover it. 00:26:16.181 [2024-12-09 05:20:52.757497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.181 [2024-12-09 05:20:52.757572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.181 [2024-12-09 05:20:52.757586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.181 [2024-12-09 05:20:52.757597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.181 [2024-12-09 05:20:52.757604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.181 [2024-12-09 05:20:52.757619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.181 qpair failed and we were unable to recover it. 
00:26:16.181 [2024-12-09 05:20:52.767496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.181 [2024-12-09 05:20:52.767600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.181 [2024-12-09 05:20:52.767615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.181 [2024-12-09 05:20:52.767623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.181 [2024-12-09 05:20:52.767630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.181 [2024-12-09 05:20:52.767644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.181 qpair failed and we were unable to recover it. 00:26:16.181 [2024-12-09 05:20:52.777531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.181 [2024-12-09 05:20:52.777589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.181 [2024-12-09 05:20:52.777605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.181 [2024-12-09 05:20:52.777612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.181 [2024-12-09 05:20:52.777618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.181 [2024-12-09 05:20:52.777633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.181 qpair failed and we were unable to recover it. 00:26:16.181 [2024-12-09 05:20:52.787562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.181 [2024-12-09 05:20:52.787627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.181 [2024-12-09 05:20:52.787642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.181 [2024-12-09 05:20:52.787650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.181 [2024-12-09 05:20:52.787657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.181 [2024-12-09 05:20:52.787672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.181 qpair failed and we were unable to recover it. 
00:26:16.181 [2024-12-09 05:20:52.797597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.181 [2024-12-09 05:20:52.797661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.181 [2024-12-09 05:20:52.797676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.181 [2024-12-09 05:20:52.797684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.181 [2024-12-09 05:20:52.797690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.181 [2024-12-09 05:20:52.797708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.181 qpair failed and we were unable to recover it. 00:26:16.181 [2024-12-09 05:20:52.807551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.181 [2024-12-09 05:20:52.807644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.181 [2024-12-09 05:20:52.807660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.181 [2024-12-09 05:20:52.807667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.181 [2024-12-09 05:20:52.807673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.181 [2024-12-09 05:20:52.807689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.181 qpair failed and we were unable to recover it. 00:26:16.181 [2024-12-09 05:20:52.817641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.181 [2024-12-09 05:20:52.817703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.181 [2024-12-09 05:20:52.817718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.181 [2024-12-09 05:20:52.817726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.181 [2024-12-09 05:20:52.817732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.181 [2024-12-09 05:20:52.817747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.181 qpair failed and we were unable to recover it. 
00:26:16.441 [2024-12-09 05:20:52.827623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.441 [2024-12-09 05:20:52.827690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.441 [2024-12-09 05:20:52.827706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.441 [2024-12-09 05:20:52.827713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.441 [2024-12-09 05:20:52.827719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.441 [2024-12-09 05:20:52.827735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.441 qpair failed and we were unable to recover it. 00:26:16.441 [2024-12-09 05:20:52.837715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.441 [2024-12-09 05:20:52.837776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.441 [2024-12-09 05:20:52.837791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.441 [2024-12-09 05:20:52.837799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.441 [2024-12-09 05:20:52.837805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.441 [2024-12-09 05:20:52.837820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.441 qpair failed and we were unable to recover it. 00:26:16.441 [2024-12-09 05:20:52.847751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.441 [2024-12-09 05:20:52.847839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.441 [2024-12-09 05:20:52.847853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.441 [2024-12-09 05:20:52.847860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.441 [2024-12-09 05:20:52.847867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.441 [2024-12-09 05:20:52.847882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.441 qpair failed and we were unable to recover it. 
00:26:16.441 [2024-12-09 05:20:52.857686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.441 [2024-12-09 05:20:52.857747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.441 [2024-12-09 05:20:52.857762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.441 [2024-12-09 05:20:52.857769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.441 [2024-12-09 05:20:52.857775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.441 [2024-12-09 05:20:52.857790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.441 qpair failed and we were unable to recover it. 00:26:16.441 [2024-12-09 05:20:52.867861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.441 [2024-12-09 05:20:52.867942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.441 [2024-12-09 05:20:52.867958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.441 [2024-12-09 05:20:52.867966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.441 [2024-12-09 05:20:52.867972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a7fbe0 00:26:16.441 [2024-12-09 05:20:52.867987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.441 qpair failed and we were unable to recover it. 00:26:16.441 [2024-12-09 05:20:52.877839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.441 [2024-12-09 05:20:52.877966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.441 [2024-12-09 05:20:52.877996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.441 [2024-12-09 05:20:52.878013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.441 [2024-12-09 05:20:52.878023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.441 [2024-12-09 05:20:52.878049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.441 qpair failed and we were unable to recover it. 
00:26:16.441 [2024-12-09 05:20:52.887889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.441 [2024-12-09 05:20:52.887955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.441 [2024-12-09 05:20:52.887971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.441 [2024-12-09 05:20:52.887982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.441 [2024-12-09 05:20:52.887988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.441 [2024-12-09 05:20:52.888010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.441 qpair failed and we were unable to recover it. 00:26:16.441 [2024-12-09 05:20:52.897905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.441 [2024-12-09 05:20:52.897965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.441 [2024-12-09 05:20:52.897983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.441 [2024-12-09 05:20:52.897990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.441 [2024-12-09 05:20:52.898002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.441 [2024-12-09 05:20:52.898019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.441 qpair failed and we were unable to recover it. 00:26:16.441 [2024-12-09 05:20:52.907897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.441 [2024-12-09 05:20:52.907961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.441 [2024-12-09 05:20:52.907976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.441 [2024-12-09 05:20:52.907983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.441 [2024-12-09 05:20:52.907989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.441 [2024-12-09 05:20:52.908010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.441 qpair failed and we were unable to recover it. 
00:26:16.441 [2024-12-09 05:20:52.917925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.441 [2024-12-09 05:20:52.917991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.441 [2024-12-09 05:20:52.918012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.441 [2024-12-09 05:20:52.918021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.441 [2024-12-09 05:20:52.918027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.441 [2024-12-09 05:20:52.918044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.441 qpair failed and we were unable to recover it. 00:26:16.441 [2024-12-09 05:20:52.927948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.441 [2024-12-09 05:20:52.928014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.441 [2024-12-09 05:20:52.928029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.441 [2024-12-09 05:20:52.928037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.441 [2024-12-09 05:20:52.928045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.441 [2024-12-09 05:20:52.928066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.441 qpair failed and we were unable to recover it. 00:26:16.441 [2024-12-09 05:20:52.937958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.441 [2024-12-09 05:20:52.938023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.441 [2024-12-09 05:20:52.938038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.441 [2024-12-09 05:20:52.938046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.441 [2024-12-09 05:20:52.938052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.441 [2024-12-09 05:20:52.938068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.441 qpair failed and we were unable to recover it. 
00:26:16.441 [2024-12-09 05:20:52.948063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.441 [2024-12-09 05:20:52.948127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.441 [2024-12-09 05:20:52.948142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.441 [2024-12-09 05:20:52.948150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.441 [2024-12-09 05:20:52.948156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.441 [2024-12-09 05:20:52.948172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.441 qpair failed and we were unable to recover it. 00:26:16.441 [2024-12-09 05:20:52.957981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.441 [2024-12-09 05:20:52.958081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.441 [2024-12-09 05:20:52.958096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.441 [2024-12-09 05:20:52.958103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.441 [2024-12-09 05:20:52.958111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.441 [2024-12-09 05:20:52.958127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.441 qpair failed and we were unable to recover it. 00:26:16.441 [2024-12-09 05:20:52.968057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.441 [2024-12-09 05:20:52.968159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.441 [2024-12-09 05:20:52.968174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.441 [2024-12-09 05:20:52.968181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.441 [2024-12-09 05:20:52.968188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.441 [2024-12-09 05:20:52.968205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.441 qpair failed and we were unable to recover it. 
00:26:16.441 [2024-12-09 05:20:52.978063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.441 [2024-12-09 05:20:52.978151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.441 [2024-12-09 05:20:52.978167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.441 [2024-12-09 05:20:52.978175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.441 [2024-12-09 05:20:52.978181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.441 [2024-12-09 05:20:52.978197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.441 qpair failed and we were unable to recover it. 00:26:16.441 [2024-12-09 05:20:52.988147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.441 [2024-12-09 05:20:52.988209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.441 [2024-12-09 05:20:52.988225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.441 [2024-12-09 05:20:52.988233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.441 [2024-12-09 05:20:52.988239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.441 [2024-12-09 05:20:52.988255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.441 qpair failed and we were unable to recover it. 00:26:16.441 [2024-12-09 05:20:52.998139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.441 [2024-12-09 05:20:52.998204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.441 [2024-12-09 05:20:52.998221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.441 [2024-12-09 05:20:52.998228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.441 [2024-12-09 05:20:52.998235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.441 [2024-12-09 05:20:52.998250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.441 qpair failed and we were unable to recover it. 
00:26:16.441 [2024-12-09 05:20:53.008178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.441 [2024-12-09 05:20:53.008241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.441 [2024-12-09 05:20:53.008255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.441 [2024-12-09 05:20:53.008263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.441 [2024-12-09 05:20:53.008270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.441 [2024-12-09 05:20:53.008286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.441 qpair failed and we were unable to recover it. 00:26:16.441 [2024-12-09 05:20:53.018240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.441 [2024-12-09 05:20:53.018301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.441 [2024-12-09 05:20:53.018319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.441 [2024-12-09 05:20:53.018327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.441 [2024-12-09 05:20:53.018333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.441 [2024-12-09 05:20:53.018349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.441 qpair failed and we were unable to recover it. 00:26:16.441 [2024-12-09 05:20:53.028252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.441 [2024-12-09 05:20:53.028327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.441 [2024-12-09 05:20:53.028341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.441 [2024-12-09 05:20:53.028348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.441 [2024-12-09 05:20:53.028355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.441 [2024-12-09 05:20:53.028370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.441 qpair failed and we were unable to recover it. 
00:26:16.441 [2024-12-09 05:20:53.038253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.442 [2024-12-09 05:20:53.038351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.442 [2024-12-09 05:20:53.038370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.442 [2024-12-09 05:20:53.038379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.442 [2024-12-09 05:20:53.038386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.442 [2024-12-09 05:20:53.038403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.442 qpair failed and we were unable to recover it. 00:26:16.442 [2024-12-09 05:20:53.048276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.442 [2024-12-09 05:20:53.048339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.442 [2024-12-09 05:20:53.048354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.442 [2024-12-09 05:20:53.048362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.442 [2024-12-09 05:20:53.048368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.442 [2024-12-09 05:20:53.048384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.442 qpair failed and we were unable to recover it. 00:26:16.442 [2024-12-09 05:20:53.058255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.442 [2024-12-09 05:20:53.058314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.442 [2024-12-09 05:20:53.058328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.442 [2024-12-09 05:20:53.058336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.442 [2024-12-09 05:20:53.058345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.442 [2024-12-09 05:20:53.058361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.442 qpair failed and we were unable to recover it. 
00:26:16.442 [2024-12-09 05:20:53.068398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.442 [2024-12-09 05:20:53.068465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.442 [2024-12-09 05:20:53.068481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.442 [2024-12-09 05:20:53.068488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.442 [2024-12-09 05:20:53.068495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.442 [2024-12-09 05:20:53.068510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.442 qpair failed and we were unable to recover it. 00:26:16.442 [2024-12-09 05:20:53.078363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.442 [2024-12-09 05:20:53.078453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.442 [2024-12-09 05:20:53.078468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.442 [2024-12-09 05:20:53.078475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.442 [2024-12-09 05:20:53.078481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.442 [2024-12-09 05:20:53.078497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.442 qpair failed and we were unable to recover it. 00:26:16.702 [2024-12-09 05:20:53.088494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.702 [2024-12-09 05:20:53.088584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.702 [2024-12-09 05:20:53.088600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.702 [2024-12-09 05:20:53.088607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.702 [2024-12-09 05:20:53.088614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.702 [2024-12-09 05:20:53.088630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.702 qpair failed and we were unable to recover it. 
00:26:16.702 [2024-12-09 05:20:53.098448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.702 [2024-12-09 05:20:53.098512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.702 [2024-12-09 05:20:53.098527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.702 [2024-12-09 05:20:53.098535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.702 [2024-12-09 05:20:53.098541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.702 [2024-12-09 05:20:53.098559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.702 qpair failed and we were unable to recover it. 00:26:16.702 [2024-12-09 05:20:53.108467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.702 [2024-12-09 05:20:53.108532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.702 [2024-12-09 05:20:53.108550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.702 [2024-12-09 05:20:53.108557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.702 [2024-12-09 05:20:53.108564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.702 [2024-12-09 05:20:53.108580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.702 qpair failed and we were unable to recover it. 00:26:16.702 [2024-12-09 05:20:53.118519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.702 [2024-12-09 05:20:53.118583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.702 [2024-12-09 05:20:53.118600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.702 [2024-12-09 05:20:53.118607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.702 [2024-12-09 05:20:53.118614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.702 [2024-12-09 05:20:53.118629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.702 qpair failed and we were unable to recover it. 
00:26:16.702 [2024-12-09 05:20:53.128451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.702 [2024-12-09 05:20:53.128512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.702 [2024-12-09 05:20:53.128529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.702 [2024-12-09 05:20:53.128537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.702 [2024-12-09 05:20:53.128544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.702 [2024-12-09 05:20:53.128560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.702 qpair failed and we were unable to recover it. 00:26:16.702 [2024-12-09 05:20:53.138478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.702 [2024-12-09 05:20:53.138539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.702 [2024-12-09 05:20:53.138555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.702 [2024-12-09 05:20:53.138562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.702 [2024-12-09 05:20:53.138568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.702 [2024-12-09 05:20:53.138584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.702 qpair failed and we were unable to recover it. 00:26:16.702 [2024-12-09 05:20:53.148558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.702 [2024-12-09 05:20:53.148629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.702 [2024-12-09 05:20:53.148647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.702 [2024-12-09 05:20:53.148655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.702 [2024-12-09 05:20:53.148661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.702 [2024-12-09 05:20:53.148678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.702 qpair failed and we were unable to recover it. 
00:26:16.702 [2024-12-09 05:20:53.158620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.702 [2024-12-09 05:20:53.158682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.702 [2024-12-09 05:20:53.158699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.702 [2024-12-09 05:20:53.158707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.702 [2024-12-09 05:20:53.158714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.702 [2024-12-09 05:20:53.158730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.702 qpair failed and we were unable to recover it. 00:26:16.702 [2024-12-09 05:20:53.168617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.702 [2024-12-09 05:20:53.168677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.702 [2024-12-09 05:20:53.168692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.702 [2024-12-09 05:20:53.168700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.702 [2024-12-09 05:20:53.168706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.702 [2024-12-09 05:20:53.168722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.702 qpair failed and we were unable to recover it. 00:26:16.702 [2024-12-09 05:20:53.178704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.702 [2024-12-09 05:20:53.178770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.702 [2024-12-09 05:20:53.178786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.702 [2024-12-09 05:20:53.178794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.702 [2024-12-09 05:20:53.178800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.702 [2024-12-09 05:20:53.178817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.702 qpair failed and we were unable to recover it. 
00:26:16.702 [2024-12-09 05:20:53.188644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.702 [2024-12-09 05:20:53.188705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.702 [2024-12-09 05:20:53.188722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.703 [2024-12-09 05:20:53.188731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.703 [2024-12-09 05:20:53.188741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.703 [2024-12-09 05:20:53.188757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.703 qpair failed and we were unable to recover it. 00:26:16.703 [2024-12-09 05:20:53.198646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.703 [2024-12-09 05:20:53.198709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.703 [2024-12-09 05:20:53.198724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.703 [2024-12-09 05:20:53.198732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.703 [2024-12-09 05:20:53.198739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.703 [2024-12-09 05:20:53.198754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.703 qpair failed and we were unable to recover it. 00:26:16.703 [2024-12-09 05:20:53.208740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.703 [2024-12-09 05:20:53.208802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.703 [2024-12-09 05:20:53.208817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.703 [2024-12-09 05:20:53.208825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.703 [2024-12-09 05:20:53.208831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.703 [2024-12-09 05:20:53.208846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.703 qpair failed and we were unable to recover it. 
00:26:16.703 [2024-12-09 05:20:53.218739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.703 [2024-12-09 05:20:53.218803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.703 [2024-12-09 05:20:53.218821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.703 [2024-12-09 05:20:53.218829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.703 [2024-12-09 05:20:53.218836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.703 [2024-12-09 05:20:53.218852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.703 qpair failed and we were unable to recover it. 00:26:16.703 [2024-12-09 05:20:53.228797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.703 [2024-12-09 05:20:53.228890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.703 [2024-12-09 05:20:53.228905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.703 [2024-12-09 05:20:53.228912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.703 [2024-12-09 05:20:53.228918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.703 [2024-12-09 05:20:53.228934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.703 qpair failed and we were unable to recover it. 00:26:16.703 [2024-12-09 05:20:53.238821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.703 [2024-12-09 05:20:53.238899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.703 [2024-12-09 05:20:53.238915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.703 [2024-12-09 05:20:53.238923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.703 [2024-12-09 05:20:53.238929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.703 [2024-12-09 05:20:53.238944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.703 qpair failed and we were unable to recover it. 
00:26:16.703 [2024-12-09 05:20:53.248792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.703 [2024-12-09 05:20:53.248850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.703 [2024-12-09 05:20:53.248865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.703 [2024-12-09 05:20:53.248873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.703 [2024-12-09 05:20:53.248879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.703 [2024-12-09 05:20:53.248895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.703 qpair failed and we were unable to recover it. 00:26:16.703 [2024-12-09 05:20:53.258864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.703 [2024-12-09 05:20:53.258923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.703 [2024-12-09 05:20:53.258939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.703 [2024-12-09 05:20:53.258946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.703 [2024-12-09 05:20:53.258954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.703 [2024-12-09 05:20:53.258970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.703 qpair failed and we were unable to recover it. 00:26:16.703 [2024-12-09 05:20:53.268928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.703 [2024-12-09 05:20:53.268992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.703 [2024-12-09 05:20:53.269013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.703 [2024-12-09 05:20:53.269021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.703 [2024-12-09 05:20:53.269028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.703 [2024-12-09 05:20:53.269045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.703 qpair failed and we were unable to recover it. 
00:26:16.703 [2024-12-09 05:20:53.278982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.703 [2024-12-09 05:20:53.279058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.703 [2024-12-09 05:20:53.279074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.703 [2024-12-09 05:20:53.279081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.703 [2024-12-09 05:20:53.279089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.703 [2024-12-09 05:20:53.279105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.703 qpair failed and we were unable to recover it. 00:26:16.703 [2024-12-09 05:20:53.289010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.703 [2024-12-09 05:20:53.289109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.703 [2024-12-09 05:20:53.289125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.703 [2024-12-09 05:20:53.289132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.703 [2024-12-09 05:20:53.289139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.703 [2024-12-09 05:20:53.289155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.703 qpair failed and we were unable to recover it. 00:26:16.703 [2024-12-09 05:20:53.299112] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.703 [2024-12-09 05:20:53.299214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.703 [2024-12-09 05:20:53.299230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.703 [2024-12-09 05:20:53.299237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.703 [2024-12-09 05:20:53.299244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.703 [2024-12-09 05:20:53.299260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.703 qpair failed and we were unable to recover it. 
00:26:16.703 [2024-12-09 05:20:53.309119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.703 [2024-12-09 05:20:53.309181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.703 [2024-12-09 05:20:53.309195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.703 [2024-12-09 05:20:53.309203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.703 [2024-12-09 05:20:53.309209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.703 [2024-12-09 05:20:53.309225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.703 qpair failed and we were unable to recover it. 00:26:16.703 [2024-12-09 05:20:53.319045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.703 [2024-12-09 05:20:53.319108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.703 [2024-12-09 05:20:53.319123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.704 [2024-12-09 05:20:53.319134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.704 [2024-12-09 05:20:53.319140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.704 [2024-12-09 05:20:53.319156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.704 qpair failed and we were unable to recover it. 00:26:16.704 [2024-12-09 05:20:53.329117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.704 [2024-12-09 05:20:53.329178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.704 [2024-12-09 05:20:53.329194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.704 [2024-12-09 05:20:53.329201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.704 [2024-12-09 05:20:53.329207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.704 [2024-12-09 05:20:53.329223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.704 qpair failed and we were unable to recover it. 
00:26:16.704 [2024-12-09 05:20:53.339109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.704 [2024-12-09 05:20:53.339169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.704 [2024-12-09 05:20:53.339184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.704 [2024-12-09 05:20:53.339191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.704 [2024-12-09 05:20:53.339198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.704 [2024-12-09 05:20:53.339214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.704 qpair failed and we were unable to recover it. 00:26:16.964 [2024-12-09 05:20:53.349172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.964 [2024-12-09 05:20:53.349239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.964 [2024-12-09 05:20:53.349255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.964 [2024-12-09 05:20:53.349262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.964 [2024-12-09 05:20:53.349269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.964 [2024-12-09 05:20:53.349284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.964 qpair failed and we were unable to recover it. 00:26:16.964 [2024-12-09 05:20:53.359207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.964 [2024-12-09 05:20:53.359296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.964 [2024-12-09 05:20:53.359310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.964 [2024-12-09 05:20:53.359318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.964 [2024-12-09 05:20:53.359324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.964 [2024-12-09 05:20:53.359344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.964 qpair failed and we were unable to recover it. 
00:26:16.964 [2024-12-09 05:20:53.369197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.964 [2024-12-09 05:20:53.369258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.964 [2024-12-09 05:20:53.369273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.964 [2024-12-09 05:20:53.369280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.964 [2024-12-09 05:20:53.369287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.964 [2024-12-09 05:20:53.369303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.964 qpair failed and we were unable to recover it. 00:26:16.964 [2024-12-09 05:20:53.379235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.964 [2024-12-09 05:20:53.379295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.964 [2024-12-09 05:20:53.379311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.964 [2024-12-09 05:20:53.379319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.964 [2024-12-09 05:20:53.379325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.964 [2024-12-09 05:20:53.379341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.964 qpair failed and we were unable to recover it. 00:26:16.964 [2024-12-09 05:20:53.389282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.964 [2024-12-09 05:20:53.389349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.964 [2024-12-09 05:20:53.389364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.964 [2024-12-09 05:20:53.389372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.964 [2024-12-09 05:20:53.389379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.964 [2024-12-09 05:20:53.389395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.964 qpair failed and we were unable to recover it. 
00:26:16.964 [2024-12-09 05:20:53.399302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.964 [2024-12-09 05:20:53.399366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.964 [2024-12-09 05:20:53.399381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.964 [2024-12-09 05:20:53.399389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.964 [2024-12-09 05:20:53.399396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.964 [2024-12-09 05:20:53.399412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.964 qpair failed and we were unable to recover it. 00:26:16.964 [2024-12-09 05:20:53.409361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.964 [2024-12-09 05:20:53.409470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.964 [2024-12-09 05:20:53.409485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.964 [2024-12-09 05:20:53.409493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.964 [2024-12-09 05:20:53.409500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.964 [2024-12-09 05:20:53.409515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.964 qpair failed and we were unable to recover it. 00:26:16.964 [2024-12-09 05:20:53.419336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.964 [2024-12-09 05:20:53.419400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.964 [2024-12-09 05:20:53.419414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.964 [2024-12-09 05:20:53.419422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.964 [2024-12-09 05:20:53.419428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.964 [2024-12-09 05:20:53.419444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.964 qpair failed and we were unable to recover it. 
00:26:16.964 [2024-12-09 05:20:53.429370] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.964 [2024-12-09 05:20:53.429434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.964 [2024-12-09 05:20:53.429451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.964 [2024-12-09 05:20:53.429462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.964 [2024-12-09 05:20:53.429470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.964 [2024-12-09 05:20:53.429488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.964 qpair failed and we were unable to recover it. 00:26:16.964 [2024-12-09 05:20:53.439387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.964 [2024-12-09 05:20:53.439449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.964 [2024-12-09 05:20:53.439464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.964 [2024-12-09 05:20:53.439471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.964 [2024-12-09 05:20:53.439478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.964 [2024-12-09 05:20:53.439494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.964 qpair failed and we were unable to recover it. 00:26:16.964 [2024-12-09 05:20:53.449427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.964 [2024-12-09 05:20:53.449487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.964 [2024-12-09 05:20:53.449502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.964 [2024-12-09 05:20:53.449513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.964 [2024-12-09 05:20:53.449519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.964 [2024-12-09 05:20:53.449534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.964 qpair failed and we were unable to recover it. 
00:26:16.964 [2024-12-09 05:20:53.459441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.964 [2024-12-09 05:20:53.459503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.964 [2024-12-09 05:20:53.459519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.964 [2024-12-09 05:20:53.459527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.964 [2024-12-09 05:20:53.459534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.964 [2024-12-09 05:20:53.459550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.964 qpair failed and we were unable to recover it. 00:26:16.965 [2024-12-09 05:20:53.469484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.965 [2024-12-09 05:20:53.469576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.965 [2024-12-09 05:20:53.469591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.965 [2024-12-09 05:20:53.469598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.965 [2024-12-09 05:20:53.469605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.965 [2024-12-09 05:20:53.469620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.965 qpair failed and we were unable to recover it. 00:26:16.965 [2024-12-09 05:20:53.479541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.965 [2024-12-09 05:20:53.479608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.965 [2024-12-09 05:20:53.479624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.965 [2024-12-09 05:20:53.479632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.965 [2024-12-09 05:20:53.479638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.965 [2024-12-09 05:20:53.479654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.965 qpair failed and we were unable to recover it. 
00:26:16.965 [2024-12-09 05:20:53.489581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.965 [2024-12-09 05:20:53.489666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.965 [2024-12-09 05:20:53.489681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.965 [2024-12-09 05:20:53.489689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.965 [2024-12-09 05:20:53.489695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.965 [2024-12-09 05:20:53.489714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.965 qpair failed and we were unable to recover it. 00:26:16.965 [2024-12-09 05:20:53.499568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.965 [2024-12-09 05:20:53.499625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.965 [2024-12-09 05:20:53.499640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.965 [2024-12-09 05:20:53.499648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.965 [2024-12-09 05:20:53.499654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.965 [2024-12-09 05:20:53.499670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.965 qpair failed and we were unable to recover it. 00:26:16.965 [2024-12-09 05:20:53.509607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.965 [2024-12-09 05:20:53.509671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.965 [2024-12-09 05:20:53.509686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.965 [2024-12-09 05:20:53.509693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.965 [2024-12-09 05:20:53.509700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.965 [2024-12-09 05:20:53.509716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.965 qpair failed and we were unable to recover it. 
00:26:16.965 [2024-12-09 05:20:53.519639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.965 [2024-12-09 05:20:53.519747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.965 [2024-12-09 05:20:53.519763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.965 [2024-12-09 05:20:53.519770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.965 [2024-12-09 05:20:53.519777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.965 [2024-12-09 05:20:53.519793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.965 qpair failed and we were unable to recover it. 00:26:16.965 [2024-12-09 05:20:53.529662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.965 [2024-12-09 05:20:53.529725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.965 [2024-12-09 05:20:53.529740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.965 [2024-12-09 05:20:53.529748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.965 [2024-12-09 05:20:53.529754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.965 [2024-12-09 05:20:53.529770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.965 qpair failed and we were unable to recover it. 00:26:16.965 [2024-12-09 05:20:53.539725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.965 [2024-12-09 05:20:53.539784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.965 [2024-12-09 05:20:53.539799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.965 [2024-12-09 05:20:53.539807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.965 [2024-12-09 05:20:53.539813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.965 [2024-12-09 05:20:53.539829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.965 qpair failed and we were unable to recover it. 
00:26:16.965 [2024-12-09 05:20:53.549723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.965 [2024-12-09 05:20:53.549785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.965 [2024-12-09 05:20:53.549799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.965 [2024-12-09 05:20:53.549807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.965 [2024-12-09 05:20:53.549813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.965 [2024-12-09 05:20:53.549829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.965 qpair failed and we were unable to recover it. 00:26:16.965 [2024-12-09 05:20:53.559730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.965 [2024-12-09 05:20:53.559795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.965 [2024-12-09 05:20:53.559809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.965 [2024-12-09 05:20:53.559817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.965 [2024-12-09 05:20:53.559823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.965 [2024-12-09 05:20:53.559839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.965 qpair failed and we were unable to recover it. 00:26:16.965 [2024-12-09 05:20:53.569826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.965 [2024-12-09 05:20:53.569922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.965 [2024-12-09 05:20:53.569937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.965 [2024-12-09 05:20:53.569944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.965 [2024-12-09 05:20:53.569951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.965 [2024-12-09 05:20:53.569967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.965 qpair failed and we were unable to recover it. 
00:26:16.965 [2024-12-09 05:20:53.579812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.965 [2024-12-09 05:20:53.579874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.965 [2024-12-09 05:20:53.579893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.965 [2024-12-09 05:20:53.579900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.965 [2024-12-09 05:20:53.579906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.965 [2024-12-09 05:20:53.579922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.965 qpair failed and we were unable to recover it. 00:26:16.965 [2024-12-09 05:20:53.589922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.965 [2024-12-09 05:20:53.589989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.965 [2024-12-09 05:20:53.590009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.965 [2024-12-09 05:20:53.590017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.965 [2024-12-09 05:20:53.590024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.965 [2024-12-09 05:20:53.590040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.966 qpair failed and we were unable to recover it. 00:26:16.966 [2024-12-09 05:20:53.599929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:16.966 [2024-12-09 05:20:53.600032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:16.966 [2024-12-09 05:20:53.600049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:16.966 [2024-12-09 05:20:53.600056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:16.966 [2024-12-09 05:20:53.600063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:16.966 [2024-12-09 05:20:53.600080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.966 qpair failed and we were unable to recover it. 
00:26:17.225 [2024-12-09 05:20:53.609878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.225 [2024-12-09 05:20:53.609943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.225 [2024-12-09 05:20:53.609958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.225 [2024-12-09 05:20:53.609966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.225 [2024-12-09 05:20:53.609972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.225 [2024-12-09 05:20:53.609988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.225 qpair failed and we were unable to recover it. 00:26:17.225 [2024-12-09 05:20:53.619985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.225 [2024-12-09 05:20:53.620100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.225 [2024-12-09 05:20:53.620116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.225 [2024-12-09 05:20:53.620123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.225 [2024-12-09 05:20:53.620133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.225 [2024-12-09 05:20:53.620149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.225 qpair failed and we were unable to recover it. 00:26:17.225 [2024-12-09 05:20:53.629941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.225 [2024-12-09 05:20:53.630015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.225 [2024-12-09 05:20:53.630031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.225 [2024-12-09 05:20:53.630038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.225 [2024-12-09 05:20:53.630044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.225 [2024-12-09 05:20:53.630060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.225 qpair failed and we were unable to recover it. 
00:26:17.225 [2024-12-09 05:20:53.640006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.225 [2024-12-09 05:20:53.640069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.225 [2024-12-09 05:20:53.640084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.225 [2024-12-09 05:20:53.640092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.225 [2024-12-09 05:20:53.640099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.225 [2024-12-09 05:20:53.640114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.225 qpair failed and we were unable to recover it. 00:26:17.225 [2024-12-09 05:20:53.649974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.225 [2024-12-09 05:20:53.650060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.225 [2024-12-09 05:20:53.650076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.225 [2024-12-09 05:20:53.650083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.225 [2024-12-09 05:20:53.650090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.225 [2024-12-09 05:20:53.650106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.225 qpair failed and we were unable to recover it. 00:26:17.225 [2024-12-09 05:20:53.660045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.225 [2024-12-09 05:20:53.660104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.225 [2024-12-09 05:20:53.660119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.225 [2024-12-09 05:20:53.660127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.225 [2024-12-09 05:20:53.660133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.225 [2024-12-09 05:20:53.660149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.225 qpair failed and we were unable to recover it. 
00:26:17.225 [2024-12-09 05:20:53.670089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.225 [2024-12-09 05:20:53.670152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.225 [2024-12-09 05:20:53.670167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.225 [2024-12-09 05:20:53.670175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.225 [2024-12-09 05:20:53.670181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.225 [2024-12-09 05:20:53.670197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.225 qpair failed and we were unable to recover it. 00:26:17.225 [2024-12-09 05:20:53.680166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.225 [2024-12-09 05:20:53.680229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.225 [2024-12-09 05:20:53.680244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.225 [2024-12-09 05:20:53.680252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.225 [2024-12-09 05:20:53.680259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.226 [2024-12-09 05:20:53.680275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.226 qpair failed and we were unable to recover it. 00:26:17.226 [2024-12-09 05:20:53.690151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.226 [2024-12-09 05:20:53.690215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.226 [2024-12-09 05:20:53.690230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.226 [2024-12-09 05:20:53.690237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.226 [2024-12-09 05:20:53.690244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.226 [2024-12-09 05:20:53.690259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.226 qpair failed and we were unable to recover it. 
00:26:17.226 [2024-12-09 05:20:53.700171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.226 [2024-12-09 05:20:53.700233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.226 [2024-12-09 05:20:53.700249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.226 [2024-12-09 05:20:53.700256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.226 [2024-12-09 05:20:53.700263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.226 [2024-12-09 05:20:53.700278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.226 qpair failed and we were unable to recover it. 00:26:17.226 [2024-12-09 05:20:53.710233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.226 [2024-12-09 05:20:53.710296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.226 [2024-12-09 05:20:53.710314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.226 [2024-12-09 05:20:53.710322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.226 [2024-12-09 05:20:53.710328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.226 [2024-12-09 05:20:53.710344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.226 qpair failed and we were unable to recover it. 00:26:17.226 [2024-12-09 05:20:53.720232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.226 [2024-12-09 05:20:53.720296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.226 [2024-12-09 05:20:53.720311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.226 [2024-12-09 05:20:53.720318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.226 [2024-12-09 05:20:53.720325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.226 [2024-12-09 05:20:53.720341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.226 qpair failed and we were unable to recover it. 
00:26:17.226 [2024-12-09 05:20:53.730250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.226 [2024-12-09 05:20:53.730311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.226 [2024-12-09 05:20:53.730325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.226 [2024-12-09 05:20:53.730333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.226 [2024-12-09 05:20:53.730339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.226 [2024-12-09 05:20:53.730355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.226 qpair failed and we were unable to recover it. 00:26:17.226 [2024-12-09 05:20:53.740300] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.226 [2024-12-09 05:20:53.740416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.226 [2024-12-09 05:20:53.740433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.226 [2024-12-09 05:20:53.740441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.226 [2024-12-09 05:20:53.740447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.226 [2024-12-09 05:20:53.740463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.226 qpair failed and we were unable to recover it. 00:26:17.226 [2024-12-09 05:20:53.750310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.226 [2024-12-09 05:20:53.750375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.226 [2024-12-09 05:20:53.750390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.226 [2024-12-09 05:20:53.750397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.226 [2024-12-09 05:20:53.750407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.226 [2024-12-09 05:20:53.750423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.226 qpair failed and we were unable to recover it. 
00:26:17.226 [2024-12-09 05:20:53.760337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.226 [2024-12-09 05:20:53.760434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.226 [2024-12-09 05:20:53.760449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.226 [2024-12-09 05:20:53.760456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.226 [2024-12-09 05:20:53.760462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.226 [2024-12-09 05:20:53.760478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.226 qpair failed and we were unable to recover it. 00:26:17.226 [2024-12-09 05:20:53.770357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.226 [2024-12-09 05:20:53.770415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.226 [2024-12-09 05:20:53.770430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.226 [2024-12-09 05:20:53.770437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.226 [2024-12-09 05:20:53.770444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.226 [2024-12-09 05:20:53.770460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.226 qpair failed and we were unable to recover it. 00:26:17.226 [2024-12-09 05:20:53.780384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.226 [2024-12-09 05:20:53.780459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.226 [2024-12-09 05:20:53.780474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.226 [2024-12-09 05:20:53.780481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.226 [2024-12-09 05:20:53.780487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.226 [2024-12-09 05:20:53.780502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.226 qpair failed and we were unable to recover it. 
00:26:17.226 [2024-12-09 05:20:53.790422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.226 [2024-12-09 05:20:53.790484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.226 [2024-12-09 05:20:53.790499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.226 [2024-12-09 05:20:53.790507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.226 [2024-12-09 05:20:53.790513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.226 [2024-12-09 05:20:53.790530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.226 qpair failed and we were unable to recover it. 00:26:17.226 [2024-12-09 05:20:53.800490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.226 [2024-12-09 05:20:53.800551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.226 [2024-12-09 05:20:53.800567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.226 [2024-12-09 05:20:53.800574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.226 [2024-12-09 05:20:53.800581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.226 [2024-12-09 05:20:53.800597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.226 qpair failed and we were unable to recover it. 00:26:17.226 [2024-12-09 05:20:53.810519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.226 [2024-12-09 05:20:53.810583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.226 [2024-12-09 05:20:53.810598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.226 [2024-12-09 05:20:53.810606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.226 [2024-12-09 05:20:53.810612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.227 [2024-12-09 05:20:53.810628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.227 qpair failed and we were unable to recover it. 
00:26:17.227 [2024-12-09 05:20:53.820503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.227 [2024-12-09 05:20:53.820594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.227 [2024-12-09 05:20:53.820610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.227 [2024-12-09 05:20:53.820618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.227 [2024-12-09 05:20:53.820624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.227 [2024-12-09 05:20:53.820640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.227 qpair failed and we were unable to recover it. 00:26:17.227 [2024-12-09 05:20:53.830537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.227 [2024-12-09 05:20:53.830601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.227 [2024-12-09 05:20:53.830615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.227 [2024-12-09 05:20:53.830623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.227 [2024-12-09 05:20:53.830629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.227 [2024-12-09 05:20:53.830645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.227 qpair failed and we were unable to recover it. 00:26:17.227 [2024-12-09 05:20:53.840567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.227 [2024-12-09 05:20:53.840633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.227 [2024-12-09 05:20:53.840648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.227 [2024-12-09 05:20:53.840655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.227 [2024-12-09 05:20:53.840662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.227 [2024-12-09 05:20:53.840678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.227 qpair failed and we were unable to recover it. 
00:26:17.227 [2024-12-09 05:20:53.850626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.227 [2024-12-09 05:20:53.850689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.227 [2024-12-09 05:20:53.850704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.227 [2024-12-09 05:20:53.850712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.227 [2024-12-09 05:20:53.850718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.227 [2024-12-09 05:20:53.850733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.227 qpair failed and we were unable to recover it. 00:26:17.227 [2024-12-09 05:20:53.860614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.227 [2024-12-09 05:20:53.860674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.227 [2024-12-09 05:20:53.860688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.227 [2024-12-09 05:20:53.860696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.227 [2024-12-09 05:20:53.860702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.227 [2024-12-09 05:20:53.860717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.227 qpair failed and we were unable to recover it. 00:26:17.486 [2024-12-09 05:20:53.870653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.486 [2024-12-09 05:20:53.870741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.486 [2024-12-09 05:20:53.870756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.486 [2024-12-09 05:20:53.870763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.486 [2024-12-09 05:20:53.870769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.486 [2024-12-09 05:20:53.870784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.486 qpair failed and we were unable to recover it. 
00:26:17.486 [2024-12-09 05:20:53.880676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.486 [2024-12-09 05:20:53.880738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.486 [2024-12-09 05:20:53.880753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.486 [2024-12-09 05:20:53.880764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.486 [2024-12-09 05:20:53.880770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.486 [2024-12-09 05:20:53.880786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.486 qpair failed and we were unable to recover it. 00:26:17.486 [2024-12-09 05:20:53.890730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.486 [2024-12-09 05:20:53.890791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.486 [2024-12-09 05:20:53.890807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.486 [2024-12-09 05:20:53.890815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.486 [2024-12-09 05:20:53.890821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.486 [2024-12-09 05:20:53.890837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.486 qpair failed and we were unable to recover it. 00:26:17.486 [2024-12-09 05:20:53.900712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.486 [2024-12-09 05:20:53.900780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.486 [2024-12-09 05:20:53.900796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.486 [2024-12-09 05:20:53.900803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.486 [2024-12-09 05:20:53.900810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.486 [2024-12-09 05:20:53.900826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.486 qpair failed and we were unable to recover it. 
00:26:17.486 [2024-12-09 05:20:53.910806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.486 [2024-12-09 05:20:53.910869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.486 [2024-12-09 05:20:53.910884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.486 [2024-12-09 05:20:53.910893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.486 [2024-12-09 05:20:53.910903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.486 [2024-12-09 05:20:53.910921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.486 qpair failed and we were unable to recover it. 00:26:17.486 [2024-12-09 05:20:53.920749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.486 [2024-12-09 05:20:53.920820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.486 [2024-12-09 05:20:53.920835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.486 [2024-12-09 05:20:53.920842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.486 [2024-12-09 05:20:53.920848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.486 [2024-12-09 05:20:53.920867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.486 qpair failed and we were unable to recover it. 00:26:17.486 [2024-12-09 05:20:53.930796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.486 [2024-12-09 05:20:53.930859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.486 [2024-12-09 05:20:53.930874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.486 [2024-12-09 05:20:53.930881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.486 [2024-12-09 05:20:53.930888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.486 [2024-12-09 05:20:53.930903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.486 qpair failed and we were unable to recover it. 
00:26:17.486 [2024-12-09 05:20:53.940827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.486 [2024-12-09 05:20:53.940890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.486 [2024-12-09 05:20:53.940906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.486 [2024-12-09 05:20:53.940913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.486 [2024-12-09 05:20:53.940919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.486 [2024-12-09 05:20:53.940936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.486 qpair failed and we were unable to recover it. 00:26:17.486 [2024-12-09 05:20:53.950871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.486 [2024-12-09 05:20:53.950935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.486 [2024-12-09 05:20:53.950949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.486 [2024-12-09 05:20:53.950957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.486 [2024-12-09 05:20:53.950964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.486 [2024-12-09 05:20:53.950979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.486 qpair failed and we were unable to recover it. 00:26:17.486 [2024-12-09 05:20:53.960890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.487 [2024-12-09 05:20:53.960956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.487 [2024-12-09 05:20:53.960970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.487 [2024-12-09 05:20:53.960978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.487 [2024-12-09 05:20:53.960984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.487 [2024-12-09 05:20:53.961003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.487 qpair failed and we were unable to recover it. 
00:26:17.487 [2024-12-09 05:20:53.970941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.487 [2024-12-09 05:20:53.971052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.487 [2024-12-09 05:20:53.971068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.487 [2024-12-09 05:20:53.971076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.487 [2024-12-09 05:20:53.971083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.487 [2024-12-09 05:20:53.971099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.487 qpair failed and we were unable to recover it. 00:26:17.487 [2024-12-09 05:20:53.980943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.487 [2024-12-09 05:20:53.981005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.487 [2024-12-09 05:20:53.981024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.487 [2024-12-09 05:20:53.981032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.487 [2024-12-09 05:20:53.981038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.487 [2024-12-09 05:20:53.981055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.487 qpair failed and we were unable to recover it. 00:26:17.487 [2024-12-09 05:20:53.991021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.487 [2024-12-09 05:20:53.991118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.487 [2024-12-09 05:20:53.991134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.487 [2024-12-09 05:20:53.991141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.487 [2024-12-09 05:20:53.991147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.487 [2024-12-09 05:20:53.991163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.487 qpair failed and we were unable to recover it. 
00:26:17.487 [2024-12-09 05:20:54.001060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.487 [2024-12-09 05:20:54.001125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.487 [2024-12-09 05:20:54.001140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.487 [2024-12-09 05:20:54.001149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.487 [2024-12-09 05:20:54.001155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.487 [2024-12-09 05:20:54.001171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.487 qpair failed and we were unable to recover it. 00:26:17.487 [2024-12-09 05:20:54.011088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.487 [2024-12-09 05:20:54.011193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.487 [2024-12-09 05:20:54.011211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.487 [2024-12-09 05:20:54.011219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.487 [2024-12-09 05:20:54.011226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.487 [2024-12-09 05:20:54.011242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.487 qpair failed and we were unable to recover it. 00:26:17.487 [2024-12-09 05:20:54.021095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.487 [2024-12-09 05:20:54.021173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.487 [2024-12-09 05:20:54.021188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.487 [2024-12-09 05:20:54.021196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.487 [2024-12-09 05:20:54.021202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.487 [2024-12-09 05:20:54.021217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.487 qpair failed and we were unable to recover it. 
00:26:17.487 [2024-12-09 05:20:54.031093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.487 [2024-12-09 05:20:54.031158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.487 [2024-12-09 05:20:54.031172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.487 [2024-12-09 05:20:54.031181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.487 [2024-12-09 05:20:54.031187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.487 [2024-12-09 05:20:54.031203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.487 qpair failed and we were unable to recover it. 00:26:17.487 [2024-12-09 05:20:54.041115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.487 [2024-12-09 05:20:54.041179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.487 [2024-12-09 05:20:54.041194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.487 [2024-12-09 05:20:54.041202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.487 [2024-12-09 05:20:54.041208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.487 [2024-12-09 05:20:54.041225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.487 qpair failed and we were unable to recover it. 00:26:17.487 [2024-12-09 05:20:54.051163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.487 [2024-12-09 05:20:54.051226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.487 [2024-12-09 05:20:54.051242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.487 [2024-12-09 05:20:54.051249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.487 [2024-12-09 05:20:54.051256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.487 [2024-12-09 05:20:54.051277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.487 qpair failed and we were unable to recover it. 
00:26:17.487 [2024-12-09 05:20:54.061165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.487 [2024-12-09 05:20:54.061225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.487 [2024-12-09 05:20:54.061240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.487 [2024-12-09 05:20:54.061247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.487 [2024-12-09 05:20:54.061254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.487 [2024-12-09 05:20:54.061270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.487 qpair failed and we were unable to recover it. 00:26:17.487 [2024-12-09 05:20:54.071207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.487 [2024-12-09 05:20:54.071272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.487 [2024-12-09 05:20:54.071287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.487 [2024-12-09 05:20:54.071295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.487 [2024-12-09 05:20:54.071301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.487 [2024-12-09 05:20:54.071317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.487 qpair failed and we were unable to recover it. 00:26:17.487 [2024-12-09 05:20:54.081229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.487 [2024-12-09 05:20:54.081294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.487 [2024-12-09 05:20:54.081309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.487 [2024-12-09 05:20:54.081316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.487 [2024-12-09 05:20:54.081323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.487 [2024-12-09 05:20:54.081338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.487 qpair failed and we were unable to recover it. 
00:26:17.487 [2024-12-09 05:20:54.091316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.488 [2024-12-09 05:20:54.091421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.488 [2024-12-09 05:20:54.091437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.488 [2024-12-09 05:20:54.091444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.488 [2024-12-09 05:20:54.091450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.488 [2024-12-09 05:20:54.091467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.488 qpair failed and we were unable to recover it. 00:26:17.488 [2024-12-09 05:20:54.101282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.488 [2024-12-09 05:20:54.101341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.488 [2024-12-09 05:20:54.101357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.488 [2024-12-09 05:20:54.101365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.488 [2024-12-09 05:20:54.101372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.488 [2024-12-09 05:20:54.101388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.488 qpair failed and we were unable to recover it. 00:26:17.488 [2024-12-09 05:20:54.111247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.488 [2024-12-09 05:20:54.111311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.488 [2024-12-09 05:20:54.111326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.488 [2024-12-09 05:20:54.111334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.488 [2024-12-09 05:20:54.111340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.488 [2024-12-09 05:20:54.111356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.488 qpair failed and we were unable to recover it. 
00:26:17.488 [2024-12-09 05:20:54.121377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.488 [2024-12-09 05:20:54.121440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.488 [2024-12-09 05:20:54.121454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.488 [2024-12-09 05:20:54.121462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.488 [2024-12-09 05:20:54.121468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.488 [2024-12-09 05:20:54.121484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.488 qpair failed and we were unable to recover it. 00:26:17.745 [2024-12-09 05:20:54.131351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.745 [2024-12-09 05:20:54.131416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.745 [2024-12-09 05:20:54.131431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.745 [2024-12-09 05:20:54.131438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.745 [2024-12-09 05:20:54.131445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.745 [2024-12-09 05:20:54.131460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.745 qpair failed and we were unable to recover it. 00:26:17.745 [2024-12-09 05:20:54.141382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.745 [2024-12-09 05:20:54.141439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.745 [2024-12-09 05:20:54.141458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.745 [2024-12-09 05:20:54.141465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.745 [2024-12-09 05:20:54.141472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.745 [2024-12-09 05:20:54.141488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.745 qpair failed and we were unable to recover it. 
00:26:17.745 [2024-12-09 05:20:54.151430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.745 [2024-12-09 05:20:54.151503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.745 [2024-12-09 05:20:54.151518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.745 [2024-12-09 05:20:54.151525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.745 [2024-12-09 05:20:54.151532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.745 [2024-12-09 05:20:54.151548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.745 qpair failed and we were unable to recover it. 00:26:17.745 [2024-12-09 05:20:54.161455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.745 [2024-12-09 05:20:54.161517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.745 [2024-12-09 05:20:54.161532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.745 [2024-12-09 05:20:54.161539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.745 [2024-12-09 05:20:54.161546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.745 [2024-12-09 05:20:54.161561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.745 qpair failed and we were unable to recover it. 00:26:17.745 [2024-12-09 05:20:54.171451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.745 [2024-12-09 05:20:54.171511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.745 [2024-12-09 05:20:54.171526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.745 [2024-12-09 05:20:54.171533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.745 [2024-12-09 05:20:54.171540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.745 [2024-12-09 05:20:54.171555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.745 qpair failed and we were unable to recover it. 
00:26:17.745 [2024-12-09 05:20:54.181492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.745 [2024-12-09 05:20:54.181554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.745 [2024-12-09 05:20:54.181569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.745 [2024-12-09 05:20:54.181576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.745 [2024-12-09 05:20:54.181586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.745 [2024-12-09 05:20:54.181601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.745 qpair failed and we were unable to recover it. 00:26:17.745 [2024-12-09 05:20:54.191485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.745 [2024-12-09 05:20:54.191550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.745 [2024-12-09 05:20:54.191565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.745 [2024-12-09 05:20:54.191573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.745 [2024-12-09 05:20:54.191579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.745 [2024-12-09 05:20:54.191595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.745 qpair failed and we were unable to recover it. 00:26:17.745 [2024-12-09 05:20:54.201509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.745 [2024-12-09 05:20:54.201575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.745 [2024-12-09 05:20:54.201593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.745 [2024-12-09 05:20:54.201601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.745 [2024-12-09 05:20:54.201607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.745 [2024-12-09 05:20:54.201623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.745 qpair failed and we were unable to recover it. 
00:26:17.745 [2024-12-09 05:20:54.211601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.745 [2024-12-09 05:20:54.211661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.746 [2024-12-09 05:20:54.211676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.746 [2024-12-09 05:20:54.211684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.746 [2024-12-09 05:20:54.211691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.746 [2024-12-09 05:20:54.211706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.746 qpair failed and we were unable to recover it. 00:26:17.746 [2024-12-09 05:20:54.221622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.746 [2024-12-09 05:20:54.221683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.746 [2024-12-09 05:20:54.221698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.746 [2024-12-09 05:20:54.221705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.746 [2024-12-09 05:20:54.221712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.746 [2024-12-09 05:20:54.221728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.746 qpair failed and we were unable to recover it. 00:26:17.746 [2024-12-09 05:20:54.231587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.746 [2024-12-09 05:20:54.231655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.746 [2024-12-09 05:20:54.231669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.746 [2024-12-09 05:20:54.231676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.746 [2024-12-09 05:20:54.231682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.746 [2024-12-09 05:20:54.231697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.746 qpair failed and we were unable to recover it. 
00:26:17.746 [2024-12-09 05:20:54.241654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.746 [2024-12-09 05:20:54.241745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.746 [2024-12-09 05:20:54.241761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.746 [2024-12-09 05:20:54.241768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.746 [2024-12-09 05:20:54.241774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.746 [2024-12-09 05:20:54.241790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.746 qpair failed and we were unable to recover it. 00:26:17.746 [2024-12-09 05:20:54.251696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.746 [2024-12-09 05:20:54.251761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.746 [2024-12-09 05:20:54.251776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.746 [2024-12-09 05:20:54.251783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.746 [2024-12-09 05:20:54.251789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.746 [2024-12-09 05:20:54.251805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.746 qpair failed and we were unable to recover it. 00:26:17.746 [2024-12-09 05:20:54.261794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.746 [2024-12-09 05:20:54.261903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.746 [2024-12-09 05:20:54.261917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.746 [2024-12-09 05:20:54.261924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.746 [2024-12-09 05:20:54.261931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.746 [2024-12-09 05:20:54.261947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.746 qpair failed and we were unable to recover it. 
00:26:17.746 [2024-12-09 05:20:54.271758] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.746 [2024-12-09 05:20:54.271826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.746 [2024-12-09 05:20:54.271844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.746 [2024-12-09 05:20:54.271851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.746 [2024-12-09 05:20:54.271858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.746 [2024-12-09 05:20:54.271873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.746 qpair failed and we were unable to recover it. 00:26:17.746 [2024-12-09 05:20:54.281724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.746 [2024-12-09 05:20:54.281793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.746 [2024-12-09 05:20:54.281808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.746 [2024-12-09 05:20:54.281815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.746 [2024-12-09 05:20:54.281821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.746 [2024-12-09 05:20:54.281837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.746 qpair failed and we were unable to recover it. 00:26:17.746 [2024-12-09 05:20:54.291803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.746 [2024-12-09 05:20:54.291863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.746 [2024-12-09 05:20:54.291879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.746 [2024-12-09 05:20:54.291888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.746 [2024-12-09 05:20:54.291894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.746 [2024-12-09 05:20:54.291910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.746 qpair failed and we were unable to recover it. 
00:26:17.746 [2024-12-09 05:20:54.301805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.746 [2024-12-09 05:20:54.301869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.746 [2024-12-09 05:20:54.301885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.746 [2024-12-09 05:20:54.301893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.746 [2024-12-09 05:20:54.301900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.746 [2024-12-09 05:20:54.301915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.746 qpair failed and we were unable to recover it. 00:26:17.746 [2024-12-09 05:20:54.311864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.746 [2024-12-09 05:20:54.311929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.746 [2024-12-09 05:20:54.311943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.746 [2024-12-09 05:20:54.311954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.746 [2024-12-09 05:20:54.311960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.746 [2024-12-09 05:20:54.311976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.746 qpair failed and we were unable to recover it. 00:26:17.746 [2024-12-09 05:20:54.321855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.746 [2024-12-09 05:20:54.321921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.746 [2024-12-09 05:20:54.321936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.746 [2024-12-09 05:20:54.321943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.746 [2024-12-09 05:20:54.321950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.746 [2024-12-09 05:20:54.321966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.746 qpair failed and we were unable to recover it. 
00:26:17.746 [2024-12-09 05:20:54.331859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.746 [2024-12-09 05:20:54.331920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.746 [2024-12-09 05:20:54.331935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.746 [2024-12-09 05:20:54.331944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.746 [2024-12-09 05:20:54.331950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.746 [2024-12-09 05:20:54.331966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.746 qpair failed and we were unable to recover it. 00:26:17.746 [2024-12-09 05:20:54.341995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.746 [2024-12-09 05:20:54.342067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.746 [2024-12-09 05:20:54.342082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.746 [2024-12-09 05:20:54.342090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.746 [2024-12-09 05:20:54.342097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.746 [2024-12-09 05:20:54.342114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.746 qpair failed and we were unable to recover it. 00:26:17.746 [2024-12-09 05:20:54.351983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.746 [2024-12-09 05:20:54.352052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.746 [2024-12-09 05:20:54.352067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.746 [2024-12-09 05:20:54.352075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.746 [2024-12-09 05:20:54.352081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.747 [2024-12-09 05:20:54.352097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.747 qpair failed and we were unable to recover it. 
00:26:17.747 [2024-12-09 05:20:54.361990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.747 [2024-12-09 05:20:54.362058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.747 [2024-12-09 05:20:54.362075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.747 [2024-12-09 05:20:54.362082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.747 [2024-12-09 05:20:54.362090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.747 [2024-12-09 05:20:54.362106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.747 qpair failed and we were unable to recover it. 00:26:17.747 [2024-12-09 05:20:54.372105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.747 [2024-12-09 05:20:54.372189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.747 [2024-12-09 05:20:54.372205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.747 [2024-12-09 05:20:54.372212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.747 [2024-12-09 05:20:54.372219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.747 [2024-12-09 05:20:54.372235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.747 qpair failed and we were unable to recover it. 00:26:17.747 [2024-12-09 05:20:54.382109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:17.747 [2024-12-09 05:20:54.382190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:17.747 [2024-12-09 05:20:54.382205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:17.747 [2024-12-09 05:20:54.382212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:17.747 [2024-12-09 05:20:54.382218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:17.747 [2024-12-09 05:20:54.382233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:17.747 qpair failed and we were unable to recover it. 
00:26:18.004 [2024-12-09 05:20:54.392174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.004 [2024-12-09 05:20:54.392235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.004 [2024-12-09 05:20:54.392251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.004 [2024-12-09 05:20:54.392258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.004 [2024-12-09 05:20:54.392265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.004 [2024-12-09 05:20:54.392281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.004 qpair failed and we were unable to recover it. 00:26:18.004 [2024-12-09 05:20:54.402110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.004 [2024-12-09 05:20:54.402175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.004 [2024-12-09 05:20:54.402191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.004 [2024-12-09 05:20:54.402199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.004 [2024-12-09 05:20:54.402206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.004 [2024-12-09 05:20:54.402222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.004 qpair failed and we were unable to recover it. 00:26:18.004 [2024-12-09 05:20:54.412084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.004 [2024-12-09 05:20:54.412153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.004 [2024-12-09 05:20:54.412169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.004 [2024-12-09 05:20:54.412178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.004 [2024-12-09 05:20:54.412185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.004 [2024-12-09 05:20:54.412200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.004 qpair failed and we were unable to recover it. 
00:26:18.004 [2024-12-09 05:20:54.422172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.004 [2024-12-09 05:20:54.422233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.004 [2024-12-09 05:20:54.422247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.004 [2024-12-09 05:20:54.422254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.004 [2024-12-09 05:20:54.422262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.004 [2024-12-09 05:20:54.422277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.004 qpair failed and we were unable to recover it. 00:26:18.004 [2024-12-09 05:20:54.432241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.004 [2024-12-09 05:20:54.432306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.004 [2024-12-09 05:20:54.432321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.004 [2024-12-09 05:20:54.432328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.004 [2024-12-09 05:20:54.432335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.004 [2024-12-09 05:20:54.432350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.004 qpair failed and we were unable to recover it. 00:26:18.004 [2024-12-09 05:20:54.442247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.005 [2024-12-09 05:20:54.442331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.005 [2024-12-09 05:20:54.442348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.005 [2024-12-09 05:20:54.442359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.005 [2024-12-09 05:20:54.442366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.005 [2024-12-09 05:20:54.442382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.005 qpair failed and we were unable to recover it. 
00:26:18.005 [2024-12-09 05:20:54.452282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.005 [2024-12-09 05:20:54.452371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.005 [2024-12-09 05:20:54.452385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.005 [2024-12-09 05:20:54.452392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.005 [2024-12-09 05:20:54.452399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.005 [2024-12-09 05:20:54.452415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.005 qpair failed and we were unable to recover it. 00:26:18.005 [2024-12-09 05:20:54.462299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.005 [2024-12-09 05:20:54.462358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.005 [2024-12-09 05:20:54.462373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.005 [2024-12-09 05:20:54.462380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.005 [2024-12-09 05:20:54.462386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.005 [2024-12-09 05:20:54.462404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.005 qpair failed and we were unable to recover it. 00:26:18.005 [2024-12-09 05:20:54.472272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.005 [2024-12-09 05:20:54.472334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.005 [2024-12-09 05:20:54.472349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.005 [2024-12-09 05:20:54.472357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.005 [2024-12-09 05:20:54.472365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.005 [2024-12-09 05:20:54.472380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.005 qpair failed and we were unable to recover it. 
00:26:18.005 [2024-12-09 05:20:54.482396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.005 [2024-12-09 05:20:54.482458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.005 [2024-12-09 05:20:54.482472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.005 [2024-12-09 05:20:54.482479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.005 [2024-12-09 05:20:54.482486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.005 [2024-12-09 05:20:54.482505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.005 qpair failed and we were unable to recover it. 00:26:18.005 [2024-12-09 05:20:54.492363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.005 [2024-12-09 05:20:54.492438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.005 [2024-12-09 05:20:54.492454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.005 [2024-12-09 05:20:54.492462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.005 [2024-12-09 05:20:54.492469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.005 [2024-12-09 05:20:54.492484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.005 qpair failed and we were unable to recover it. 00:26:18.005 [2024-12-09 05:20:54.502406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.005 [2024-12-09 05:20:54.502465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.005 [2024-12-09 05:20:54.502480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.005 [2024-12-09 05:20:54.502488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.005 [2024-12-09 05:20:54.502494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.005 [2024-12-09 05:20:54.502510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.005 qpair failed and we were unable to recover it. 
00:26:18.005 [2024-12-09 05:20:54.512393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.005 [2024-12-09 05:20:54.512457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.005 [2024-12-09 05:20:54.512472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.005 [2024-12-09 05:20:54.512479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.005 [2024-12-09 05:20:54.512486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.005 [2024-12-09 05:20:54.512502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.005 qpair failed and we were unable to recover it. 00:26:18.005 [2024-12-09 05:20:54.522438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.005 [2024-12-09 05:20:54.522534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.005 [2024-12-09 05:20:54.522548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.005 [2024-12-09 05:20:54.522555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.005 [2024-12-09 05:20:54.522561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.005 [2024-12-09 05:20:54.522577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.005 qpair failed and we were unable to recover it. 00:26:18.005 [2024-12-09 05:20:54.532422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.005 [2024-12-09 05:20:54.532487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.005 [2024-12-09 05:20:54.532502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.005 [2024-12-09 05:20:54.532509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.005 [2024-12-09 05:20:54.532516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.005 [2024-12-09 05:20:54.532532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.005 qpair failed and we were unable to recover it. 
00:26:18.005 [2024-12-09 05:20:54.542455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.005 [2024-12-09 05:20:54.542513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.005 [2024-12-09 05:20:54.542528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.005 [2024-12-09 05:20:54.542535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.005 [2024-12-09 05:20:54.542542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.005 [2024-12-09 05:20:54.542557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.005 qpair failed and we were unable to recover it. 00:26:18.005 [2024-12-09 05:20:54.552504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.005 [2024-12-09 05:20:54.552565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.005 [2024-12-09 05:20:54.552580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.005 [2024-12-09 05:20:54.552588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.005 [2024-12-09 05:20:54.552594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.005 [2024-12-09 05:20:54.552609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.005 qpair failed and we were unable to recover it. 00:26:18.005 [2024-12-09 05:20:54.562563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.005 [2024-12-09 05:20:54.562625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.005 [2024-12-09 05:20:54.562639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.005 [2024-12-09 05:20:54.562646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.005 [2024-12-09 05:20:54.562653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.005 [2024-12-09 05:20:54.562668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.005 qpair failed and we were unable to recover it. 
00:26:18.005 [2024-12-09 05:20:54.572585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.005 [2024-12-09 05:20:54.572646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.005 [2024-12-09 05:20:54.572665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.005 [2024-12-09 05:20:54.572672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.005 [2024-12-09 05:20:54.572679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.005 [2024-12-09 05:20:54.572694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.005 qpair failed and we were unable to recover it. 00:26:18.005 [2024-12-09 05:20:54.582580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.005 [2024-12-09 05:20:54.582666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.005 [2024-12-09 05:20:54.582681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.005 [2024-12-09 05:20:54.582688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.005 [2024-12-09 05:20:54.582695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.005 [2024-12-09 05:20:54.582710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.005 qpair failed and we were unable to recover it. 00:26:18.005 [2024-12-09 05:20:54.592659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.005 [2024-12-09 05:20:54.592725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.005 [2024-12-09 05:20:54.592741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.005 [2024-12-09 05:20:54.592748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.005 [2024-12-09 05:20:54.592754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.005 [2024-12-09 05:20:54.592770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.005 qpair failed and we were unable to recover it. 
00:26:18.005 [2024-12-09 05:20:54.602675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.005 [2024-12-09 05:20:54.602740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.005 [2024-12-09 05:20:54.602757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.005 [2024-12-09 05:20:54.602766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.005 [2024-12-09 05:20:54.602773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.005 [2024-12-09 05:20:54.602789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.005 qpair failed and we were unable to recover it. 00:26:18.005 [2024-12-09 05:20:54.612712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.005 [2024-12-09 05:20:54.612769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.005 [2024-12-09 05:20:54.612784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.005 [2024-12-09 05:20:54.612791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.005 [2024-12-09 05:20:54.612798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.005 [2024-12-09 05:20:54.612816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.005 qpair failed and we were unable to recover it. 00:26:18.005 [2024-12-09 05:20:54.622674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.005 [2024-12-09 05:20:54.622734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.005 [2024-12-09 05:20:54.622748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.005 [2024-12-09 05:20:54.622755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.005 [2024-12-09 05:20:54.622761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.005 [2024-12-09 05:20:54.622776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.005 qpair failed and we were unable to recover it. 
00:26:18.005 [2024-12-09 05:20:54.632804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.005 [2024-12-09 05:20:54.632883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.005 [2024-12-09 05:20:54.632898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.005 [2024-12-09 05:20:54.632905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.005 [2024-12-09 05:20:54.632912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.005 [2024-12-09 05:20:54.632927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.005 qpair failed and we were unable to recover it. 00:26:18.005 [2024-12-09 05:20:54.642823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.005 [2024-12-09 05:20:54.642883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.005 [2024-12-09 05:20:54.642898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.005 [2024-12-09 05:20:54.642906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.005 [2024-12-09 05:20:54.642912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.005 [2024-12-09 05:20:54.642927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.005 qpair failed and we were unable to recover it. 00:26:18.265 [2024-12-09 05:20:54.652760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.265 [2024-12-09 05:20:54.652817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.265 [2024-12-09 05:20:54.652832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.265 [2024-12-09 05:20:54.652840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.265 [2024-12-09 05:20:54.652846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.265 [2024-12-09 05:20:54.652862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.265 qpair failed and we were unable to recover it. 
00:26:18.265 [2024-12-09 05:20:54.662858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.265 [2024-12-09 05:20:54.662916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.265 [2024-12-09 05:20:54.662930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.265 [2024-12-09 05:20:54.662938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.265 [2024-12-09 05:20:54.662944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.265 [2024-12-09 05:20:54.662959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.265 qpair failed and we were unable to recover it. 00:26:18.265 [2024-12-09 05:20:54.672885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.265 [2024-12-09 05:20:54.672949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.265 [2024-12-09 05:20:54.672964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.265 [2024-12-09 05:20:54.672972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.265 [2024-12-09 05:20:54.672978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.265 [2024-12-09 05:20:54.672993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.265 qpair failed and we were unable to recover it. 00:26:18.265 [2024-12-09 05:20:54.682922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.265 [2024-12-09 05:20:54.682981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.265 [2024-12-09 05:20:54.682996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.265 [2024-12-09 05:20:54.683008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.265 [2024-12-09 05:20:54.683014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.265 [2024-12-09 05:20:54.683030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.265 qpair failed and we were unable to recover it. 
00:26:18.265 [2024-12-09 05:20:54.692940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.265 [2024-12-09 05:20:54.693000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.265 [2024-12-09 05:20:54.693016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.265 [2024-12-09 05:20:54.693023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.265 [2024-12-09 05:20:54.693029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.265 [2024-12-09 05:20:54.693045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.265 qpair failed and we were unable to recover it. 00:26:18.265 [2024-12-09 05:20:54.702976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.265 [2024-12-09 05:20:54.703038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.265 [2024-12-09 05:20:54.703057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.265 [2024-12-09 05:20:54.703065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.265 [2024-12-09 05:20:54.703071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.265 [2024-12-09 05:20:54.703086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.265 qpair failed and we were unable to recover it. 00:26:18.265 [2024-12-09 05:20:54.713017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.265 [2024-12-09 05:20:54.713079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.265 [2024-12-09 05:20:54.713093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.265 [2024-12-09 05:20:54.713101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.265 [2024-12-09 05:20:54.713108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.265 [2024-12-09 05:20:54.713123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.265 qpair failed and we were unable to recover it. 
00:26:18.265 [2024-12-09 05:20:54.723026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.265 [2024-12-09 05:20:54.723091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.265 [2024-12-09 05:20:54.723106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.265 [2024-12-09 05:20:54.723113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.265 [2024-12-09 05:20:54.723120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.265 [2024-12-09 05:20:54.723136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.265 qpair failed and we were unable to recover it. 00:26:18.265 [2024-12-09 05:20:54.732993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.265 [2024-12-09 05:20:54.733055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.265 [2024-12-09 05:20:54.733069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.265 [2024-12-09 05:20:54.733076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.265 [2024-12-09 05:20:54.733083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.265 [2024-12-09 05:20:54.733098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.265 qpair failed and we were unable to recover it. 00:26:18.265 [2024-12-09 05:20:54.743040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.265 [2024-12-09 05:20:54.743104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.265 [2024-12-09 05:20:54.743119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.265 [2024-12-09 05:20:54.743127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.265 [2024-12-09 05:20:54.743137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.265 [2024-12-09 05:20:54.743152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.265 qpair failed and we were unable to recover it. 
00:26:18.265 [2024-12-09 05:20:54.753120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.265 [2024-12-09 05:20:54.753181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.265 [2024-12-09 05:20:54.753196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.265 [2024-12-09 05:20:54.753204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.265 [2024-12-09 05:20:54.753211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.265 [2024-12-09 05:20:54.753226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.265 qpair failed and we were unable to recover it. 00:26:18.265 [2024-12-09 05:20:54.763164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.265 [2024-12-09 05:20:54.763234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.265 [2024-12-09 05:20:54.763249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.265 [2024-12-09 05:20:54.763257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.265 [2024-12-09 05:20:54.763263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.265 [2024-12-09 05:20:54.763279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.265 qpair failed and we were unable to recover it. 00:26:18.265 [2024-12-09 05:20:54.773215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.265 [2024-12-09 05:20:54.773302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.265 [2024-12-09 05:20:54.773317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.265 [2024-12-09 05:20:54.773324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.265 [2024-12-09 05:20:54.773330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.265 [2024-12-09 05:20:54.773346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.265 qpair failed and we were unable to recover it. 
00:26:18.265 [2024-12-09 05:20:54.783206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.265 [2024-12-09 05:20:54.783270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.265 [2024-12-09 05:20:54.783285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.265 [2024-12-09 05:20:54.783293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.265 [2024-12-09 05:20:54.783299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.265 [2024-12-09 05:20:54.783315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.265 qpair failed and we were unable to recover it. 00:26:18.265 [2024-12-09 05:20:54.793247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.265 [2024-12-09 05:20:54.793310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.265 [2024-12-09 05:20:54.793325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.265 [2024-12-09 05:20:54.793333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.265 [2024-12-09 05:20:54.793340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.265 [2024-12-09 05:20:54.793356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.265 qpair failed and we were unable to recover it. 00:26:18.265 [2024-12-09 05:20:54.803216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.265 [2024-12-09 05:20:54.803279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.265 [2024-12-09 05:20:54.803294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.265 [2024-12-09 05:20:54.803302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.265 [2024-12-09 05:20:54.803308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.265 [2024-12-09 05:20:54.803324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.265 qpair failed and we were unable to recover it. 
00:26:18.265 [2024-12-09 05:20:54.813297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.265 [2024-12-09 05:20:54.813357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.265 [2024-12-09 05:20:54.813373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.265 [2024-12-09 05:20:54.813381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.265 [2024-12-09 05:20:54.813387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.265 [2024-12-09 05:20:54.813403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.265 qpair failed and we were unable to recover it. 00:26:18.265 [2024-12-09 05:20:54.823326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.265 [2024-12-09 05:20:54.823386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.265 [2024-12-09 05:20:54.823400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.266 [2024-12-09 05:20:54.823408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.266 [2024-12-09 05:20:54.823415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.266 [2024-12-09 05:20:54.823430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.266 qpair failed and we were unable to recover it. 00:26:18.266 [2024-12-09 05:20:54.833301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.266 [2024-12-09 05:20:54.833362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.266 [2024-12-09 05:20:54.833379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.266 [2024-12-09 05:20:54.833386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.266 [2024-12-09 05:20:54.833393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.266 [2024-12-09 05:20:54.833408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.266 qpair failed and we were unable to recover it. 
00:26:18.266 [2024-12-09 05:20:54.843400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.266 [2024-12-09 05:20:54.843476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.266 [2024-12-09 05:20:54.843491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.266 [2024-12-09 05:20:54.843498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.266 [2024-12-09 05:20:54.843505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.266 [2024-12-09 05:20:54.843521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.266 qpair failed and we were unable to recover it. 00:26:18.266 [2024-12-09 05:20:54.853450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.266 [2024-12-09 05:20:54.853514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.266 [2024-12-09 05:20:54.853530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.266 [2024-12-09 05:20:54.853538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.266 [2024-12-09 05:20:54.853544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.266 [2024-12-09 05:20:54.853561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.266 qpair failed and we were unable to recover it. 00:26:18.266 [2024-12-09 05:20:54.863384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.266 [2024-12-09 05:20:54.863443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.266 [2024-12-09 05:20:54.863457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.266 [2024-12-09 05:20:54.863465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.266 [2024-12-09 05:20:54.863471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.266 [2024-12-09 05:20:54.863486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.266 qpair failed and we were unable to recover it. 
00:26:18.266 [2024-12-09 05:20:54.873473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.266 [2024-12-09 05:20:54.873536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.266 [2024-12-09 05:20:54.873550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.266 [2024-12-09 05:20:54.873561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.266 [2024-12-09 05:20:54.873568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.266 [2024-12-09 05:20:54.873583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.266 qpair failed and we were unable to recover it. 00:26:18.266 [2024-12-09 05:20:54.883499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.266 [2024-12-09 05:20:54.883563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.266 [2024-12-09 05:20:54.883578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.266 [2024-12-09 05:20:54.883586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.266 [2024-12-09 05:20:54.883592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.266 [2024-12-09 05:20:54.883607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.266 qpair failed and we were unable to recover it. 00:26:18.266 [2024-12-09 05:20:54.893527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.266 [2024-12-09 05:20:54.893598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.266 [2024-12-09 05:20:54.893613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.266 [2024-12-09 05:20:54.893621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.266 [2024-12-09 05:20:54.893627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.266 [2024-12-09 05:20:54.893643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.266 qpair failed and we were unable to recover it. 
00:26:18.266 [2024-12-09 05:20:54.903544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.266 [2024-12-09 05:20:54.903601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.266 [2024-12-09 05:20:54.903619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.266 [2024-12-09 05:20:54.903629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.266 [2024-12-09 05:20:54.903636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.266 [2024-12-09 05:20:54.903655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.266 qpair failed and we were unable to recover it. 00:26:18.526 [2024-12-09 05:20:54.913585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.526 [2024-12-09 05:20:54.913645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.526 [2024-12-09 05:20:54.913659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.526 [2024-12-09 05:20:54.913666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.526 [2024-12-09 05:20:54.913673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.526 [2024-12-09 05:20:54.913687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-12-09 05:20:54.923630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.526 [2024-12-09 05:20:54.923692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.526 [2024-12-09 05:20:54.923707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.526 [2024-12-09 05:20:54.923715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.526 [2024-12-09 05:20:54.923721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.526 [2024-12-09 05:20:54.923737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.526 qpair failed and we were unable to recover it. 
00:26:18.526 [2024-12-09 05:20:54.933607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.526 [2024-12-09 05:20:54.933707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.526 [2024-12-09 05:20:54.933722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.526 [2024-12-09 05:20:54.933729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.526 [2024-12-09 05:20:54.933736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.526 [2024-12-09 05:20:54.933751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-12-09 05:20:54.943668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.526 [2024-12-09 05:20:54.943758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.526 [2024-12-09 05:20:54.943774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.526 [2024-12-09 05:20:54.943781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.526 [2024-12-09 05:20:54.943788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.526 [2024-12-09 05:20:54.943803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-12-09 05:20:54.953729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.526 [2024-12-09 05:20:54.953813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.526 [2024-12-09 05:20:54.953828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.526 [2024-12-09 05:20:54.953836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.526 [2024-12-09 05:20:54.953842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.526 [2024-12-09 05:20:54.953857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.526 qpair failed and we were unable to recover it. 
00:26:18.526 [2024-12-09 05:20:54.963739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.526 [2024-12-09 05:20:54.963804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.526 [2024-12-09 05:20:54.963819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.526 [2024-12-09 05:20:54.963826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.526 [2024-12-09 05:20:54.963832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.526 [2024-12-09 05:20:54.963847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-12-09 05:20:54.973789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.526 [2024-12-09 05:20:54.973844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.526 [2024-12-09 05:20:54.973858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.526 [2024-12-09 05:20:54.973866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.526 [2024-12-09 05:20:54.973872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.526 [2024-12-09 05:20:54.973888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-12-09 05:20:54.983777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.526 [2024-12-09 05:20:54.983838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.526 [2024-12-09 05:20:54.983853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.526 [2024-12-09 05:20:54.983860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.526 [2024-12-09 05:20:54.983866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.526 [2024-12-09 05:20:54.983881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.526 qpair failed and we were unable to recover it. 
00:26:18.526 [2024-12-09 05:20:54.993809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.526 [2024-12-09 05:20:54.993871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.526 [2024-12-09 05:20:54.993887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.526 [2024-12-09 05:20:54.993895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.526 [2024-12-09 05:20:54.993902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.526 [2024-12-09 05:20:54.993917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-12-09 05:20:55.003843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.526 [2024-12-09 05:20:55.003905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.526 [2024-12-09 05:20:55.003920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.526 [2024-12-09 05:20:55.003930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.526 [2024-12-09 05:20:55.003937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.526 [2024-12-09 05:20:55.003953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-12-09 05:20:55.013876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.526 [2024-12-09 05:20:55.013941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.526 [2024-12-09 05:20:55.013955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.526 [2024-12-09 05:20:55.013964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.526 [2024-12-09 05:20:55.013970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.526 [2024-12-09 05:20:55.013985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.526 qpair failed and we were unable to recover it. 
00:26:18.526 [2024-12-09 05:20:55.023905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.526 [2024-12-09 05:20:55.023991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.526 [2024-12-09 05:20:55.024010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.526 [2024-12-09 05:20:55.024018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.526 [2024-12-09 05:20:55.024024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.526 [2024-12-09 05:20:55.024039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-12-09 05:20:55.033924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.526 [2024-12-09 05:20:55.033982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.526 [2024-12-09 05:20:55.033996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.526 [2024-12-09 05:20:55.034008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.526 [2024-12-09 05:20:55.034014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.526 [2024-12-09 05:20:55.034031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-12-09 05:20:55.043948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.526 [2024-12-09 05:20:55.044054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.526 [2024-12-09 05:20:55.044070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.526 [2024-12-09 05:20:55.044077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.526 [2024-12-09 05:20:55.044084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.526 [2024-12-09 05:20:55.044104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.526 qpair failed and we were unable to recover it. 
00:26:18.526 [2024-12-09 05:20:55.053932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.526 [2024-12-09 05:20:55.054013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.526 [2024-12-09 05:20:55.054028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.526 [2024-12-09 05:20:55.054036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.526 [2024-12-09 05:20:55.054042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.526 [2024-12-09 05:20:55.054058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-12-09 05:20:55.063932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.526 [2024-12-09 05:20:55.063992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.526 [2024-12-09 05:20:55.064010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.526 [2024-12-09 05:20:55.064017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.526 [2024-12-09 05:20:55.064023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.526 [2024-12-09 05:20:55.064039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-12-09 05:20:55.074043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.526 [2024-12-09 05:20:55.074107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.526 [2024-12-09 05:20:55.074122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.526 [2024-12-09 05:20:55.074129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.526 [2024-12-09 05:20:55.074136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.526 [2024-12-09 05:20:55.074151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.527 qpair failed and we were unable to recover it. 
00:26:18.527 [2024-12-09 05:20:55.084096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.527 [2024-12-09 05:20:55.084163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.527 [2024-12-09 05:20:55.084178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.527 [2024-12-09 05:20:55.084186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.527 [2024-12-09 05:20:55.084192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.527 [2024-12-09 05:20:55.084207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-12-09 05:20:55.094030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.527 [2024-12-09 05:20:55.094095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.527 [2024-12-09 05:20:55.094110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.527 [2024-12-09 05:20:55.094118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.527 [2024-12-09 05:20:55.094124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.527 [2024-12-09 05:20:55.094140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-12-09 05:20:55.104113] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.527 [2024-12-09 05:20:55.104177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.527 [2024-12-09 05:20:55.104192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.527 [2024-12-09 05:20:55.104200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.527 [2024-12-09 05:20:55.104206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.527 [2024-12-09 05:20:55.104222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.527 qpair failed and we were unable to recover it. 
00:26:18.527 [2024-12-09 05:20:55.114168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.527 [2024-12-09 05:20:55.114233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.527 [2024-12-09 05:20:55.114247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.527 [2024-12-09 05:20:55.114255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.527 [2024-12-09 05:20:55.114262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.527 [2024-12-09 05:20:55.114277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-12-09 05:20:55.124231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.527 [2024-12-09 05:20:55.124301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.527 [2024-12-09 05:20:55.124336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.527 [2024-12-09 05:20:55.124344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.527 [2024-12-09 05:20:55.124351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.527 [2024-12-09 05:20:55.124375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-12-09 05:20:55.134194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.527 [2024-12-09 05:20:55.134300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.527 [2024-12-09 05:20:55.134318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.527 [2024-12-09 05:20:55.134326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.527 [2024-12-09 05:20:55.134333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.527 [2024-12-09 05:20:55.134350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.527 qpair failed and we were unable to recover it. 
00:26:18.527 [2024-12-09 05:20:55.144228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.527 [2024-12-09 05:20:55.144291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.527 [2024-12-09 05:20:55.144308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.527 [2024-12-09 05:20:55.144317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.527 [2024-12-09 05:20:55.144324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.527 [2024-12-09 05:20:55.144340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-12-09 05:20:55.154297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.527 [2024-12-09 05:20:55.154378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.527 [2024-12-09 05:20:55.154393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.527 [2024-12-09 05:20:55.154400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.527 [2024-12-09 05:20:55.154406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.527 [2024-12-09 05:20:55.154422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-12-09 05:20:55.164343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.527 [2024-12-09 05:20:55.164422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.527 [2024-12-09 05:20:55.164439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.527 [2024-12-09 05:20:55.164446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.527 [2024-12-09 05:20:55.164453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.527 [2024-12-09 05:20:55.164469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.527 qpair failed and we were unable to recover it. 
00:26:18.786 [2024-12-09 05:20:55.174335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.786 [2024-12-09 05:20:55.174404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.786 [2024-12-09 05:20:55.174419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.786 [2024-12-09 05:20:55.174426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.786 [2024-12-09 05:20:55.174435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.786 [2024-12-09 05:20:55.174451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-12-09 05:20:55.184356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.786 [2024-12-09 05:20:55.184419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.786 [2024-12-09 05:20:55.184434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.786 [2024-12-09 05:20:55.184441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.786 [2024-12-09 05:20:55.184448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.786 [2024-12-09 05:20:55.184464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-12-09 05:20:55.194380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.786 [2024-12-09 05:20:55.194443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.786 [2024-12-09 05:20:55.194459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.786 [2024-12-09 05:20:55.194466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.786 [2024-12-09 05:20:55.194473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.786 [2024-12-09 05:20:55.194488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.786 qpair failed and we were unable to recover it. 
00:26:18.786 [2024-12-09 05:20:55.204429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.786 [2024-12-09 05:20:55.204494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.786 [2024-12-09 05:20:55.204510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.786 [2024-12-09 05:20:55.204517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.786 [2024-12-09 05:20:55.204523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.786 [2024-12-09 05:20:55.204540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-12-09 05:20:55.214482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.786 [2024-12-09 05:20:55.214552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.786 [2024-12-09 05:20:55.214567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.786 [2024-12-09 05:20:55.214574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.786 [2024-12-09 05:20:55.214581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.786 [2024-12-09 05:20:55.214596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.786 qpair failed and we were unable to recover it. 00:26:18.786 [2024-12-09 05:20:55.224470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.786 [2024-12-09 05:20:55.224530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.786 [2024-12-09 05:20:55.224544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.786 [2024-12-09 05:20:55.224552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.786 [2024-12-09 05:20:55.224558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.786 [2024-12-09 05:20:55.224574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.786 qpair failed and we were unable to recover it. 
00:26:18.786 [2024-12-09 05:20:55.234514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.786 [2024-12-09 05:20:55.234587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.786 [2024-12-09 05:20:55.234602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.786 [2024-12-09 05:20:55.234609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.787 [2024-12-09 05:20:55.234615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.787 [2024-12-09 05:20:55.234630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-12-09 05:20:55.244516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.787 [2024-12-09 05:20:55.244574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.787 [2024-12-09 05:20:55.244589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.787 [2024-12-09 05:20:55.244596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.787 [2024-12-09 05:20:55.244603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.787 [2024-12-09 05:20:55.244618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-12-09 05:20:55.254609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.787 [2024-12-09 05:20:55.254677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.787 [2024-12-09 05:20:55.254692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.787 [2024-12-09 05:20:55.254699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.787 [2024-12-09 05:20:55.254707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.787 [2024-12-09 05:20:55.254722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.787 qpair failed and we were unable to recover it. 
00:26:18.787 [2024-12-09 05:20:55.264582] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.787 [2024-12-09 05:20:55.264640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.787 [2024-12-09 05:20:55.264659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.787 [2024-12-09 05:20:55.264668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.787 [2024-12-09 05:20:55.264675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.787 [2024-12-09 05:20:55.264690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-12-09 05:20:55.274609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.787 [2024-12-09 05:20:55.274669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.787 [2024-12-09 05:20:55.274684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.787 [2024-12-09 05:20:55.274692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.787 [2024-12-09 05:20:55.274698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.787 [2024-12-09 05:20:55.274713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-12-09 05:20:55.284656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.787 [2024-12-09 05:20:55.284717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.787 [2024-12-09 05:20:55.284731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.787 [2024-12-09 05:20:55.284739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.787 [2024-12-09 05:20:55.284745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.787 [2024-12-09 05:20:55.284760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.787 qpair failed and we were unable to recover it. 
00:26:18.787 [2024-12-09 05:20:55.294684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.787 [2024-12-09 05:20:55.294743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.787 [2024-12-09 05:20:55.294758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.787 [2024-12-09 05:20:55.294766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.787 [2024-12-09 05:20:55.294772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.787 [2024-12-09 05:20:55.294788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-12-09 05:20:55.304706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.787 [2024-12-09 05:20:55.304787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.787 [2024-12-09 05:20:55.304802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.787 [2024-12-09 05:20:55.304810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.787 [2024-12-09 05:20:55.304819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.787 [2024-12-09 05:20:55.304835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-12-09 05:20:55.314758] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.787 [2024-12-09 05:20:55.314826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.787 [2024-12-09 05:20:55.314840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.787 [2024-12-09 05:20:55.314848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.787 [2024-12-09 05:20:55.314854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.787 [2024-12-09 05:20:55.314870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.787 qpair failed and we were unable to recover it. 
00:26:18.787 [2024-12-09 05:20:55.324763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.787 [2024-12-09 05:20:55.324825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.787 [2024-12-09 05:20:55.324840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.787 [2024-12-09 05:20:55.324847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.787 [2024-12-09 05:20:55.324854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.787 [2024-12-09 05:20:55.324869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-12-09 05:20:55.334796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.787 [2024-12-09 05:20:55.334865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.787 [2024-12-09 05:20:55.334880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.787 [2024-12-09 05:20:55.334887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.787 [2024-12-09 05:20:55.334894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.787 [2024-12-09 05:20:55.334909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-12-09 05:20:55.344842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.787 [2024-12-09 05:20:55.344913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.787 [2024-12-09 05:20:55.344928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.787 [2024-12-09 05:20:55.344935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.787 [2024-12-09 05:20:55.344942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.787 [2024-12-09 05:20:55.344957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.787 qpair failed and we were unable to recover it. 
00:26:18.787 [2024-12-09 05:20:55.354839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.787 [2024-12-09 05:20:55.354904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.787 [2024-12-09 05:20:55.354918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.787 [2024-12-09 05:20:55.354925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.787 [2024-12-09 05:20:55.354932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.787 [2024-12-09 05:20:55.354947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.787 qpair failed and we were unable to recover it. 00:26:18.787 [2024-12-09 05:20:55.364898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.787 [2024-12-09 05:20:55.364972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.787 [2024-12-09 05:20:55.364987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.788 [2024-12-09 05:20:55.364994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.788 [2024-12-09 05:20:55.365005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.788 [2024-12-09 05:20:55.365021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-12-09 05:20:55.374917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.788 [2024-12-09 05:20:55.374988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.788 [2024-12-09 05:20:55.375006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.788 [2024-12-09 05:20:55.375014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.788 [2024-12-09 05:20:55.375020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.788 [2024-12-09 05:20:55.375036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.788 qpair failed and we were unable to recover it. 
00:26:18.788 [2024-12-09 05:20:55.384932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.788 [2024-12-09 05:20:55.385018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.788 [2024-12-09 05:20:55.385033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.788 [2024-12-09 05:20:55.385041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.788 [2024-12-09 05:20:55.385047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.788 [2024-12-09 05:20:55.385062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-12-09 05:20:55.394979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.788 [2024-12-09 05:20:55.395059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.788 [2024-12-09 05:20:55.395081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.788 [2024-12-09 05:20:55.395091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.788 [2024-12-09 05:20:55.395099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.788 [2024-12-09 05:20:55.395117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-12-09 05:20:55.405037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.788 [2024-12-09 05:20:55.405099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.788 [2024-12-09 05:20:55.405114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.788 [2024-12-09 05:20:55.405122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.788 [2024-12-09 05:20:55.405129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.788 [2024-12-09 05:20:55.405146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.788 qpair failed and we were unable to recover it. 
00:26:18.788 [2024-12-09 05:20:55.414962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.788 [2024-12-09 05:20:55.415033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.788 [2024-12-09 05:20:55.415048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.788 [2024-12-09 05:20:55.415056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.788 [2024-12-09 05:20:55.415063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.788 [2024-12-09 05:20:55.415079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.788 qpair failed and we were unable to recover it. 00:26:18.788 [2024-12-09 05:20:55.425057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.788 [2024-12-09 05:20:55.425120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.788 [2024-12-09 05:20:55.425135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.788 [2024-12-09 05:20:55.425143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.788 [2024-12-09 05:20:55.425150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:18.788 [2024-12-09 05:20:55.425166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:18.788 qpair failed and we were unable to recover it. 00:26:19.047 [2024-12-09 05:20:55.435147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.047 [2024-12-09 05:20:55.435253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.047 [2024-12-09 05:20:55.435269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.047 [2024-12-09 05:20:55.435279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.047 [2024-12-09 05:20:55.435286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.048 [2024-12-09 05:20:55.435301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.048 qpair failed and we were unable to recover it. 
00:26:19.048 [2024-12-09 05:20:55.445128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.048 [2024-12-09 05:20:55.445189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.048 [2024-12-09 05:20:55.445204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.048 [2024-12-09 05:20:55.445212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.048 [2024-12-09 05:20:55.445218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.048 [2024-12-09 05:20:55.445233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.048 qpair failed and we were unable to recover it. 00:26:19.048 [2024-12-09 05:20:55.455131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.048 [2024-12-09 05:20:55.455192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.048 [2024-12-09 05:20:55.455207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.048 [2024-12-09 05:20:55.455214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.048 [2024-12-09 05:20:55.455220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.048 [2024-12-09 05:20:55.455237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.048 qpair failed and we were unable to recover it. 00:26:19.048 [2024-12-09 05:20:55.465203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.048 [2024-12-09 05:20:55.465266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.048 [2024-12-09 05:20:55.465281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.048 [2024-12-09 05:20:55.465288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.048 [2024-12-09 05:20:55.465294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.048 [2024-12-09 05:20:55.465309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.048 qpair failed and we were unable to recover it. 
00:26:19.048 [2024-12-09 05:20:55.475198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.048 [2024-12-09 05:20:55.475263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.048 [2024-12-09 05:20:55.475277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.048 [2024-12-09 05:20:55.475285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.048 [2024-12-09 05:20:55.475291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.048 [2024-12-09 05:20:55.475307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.048 qpair failed and we were unable to recover it. 00:26:19.048 [2024-12-09 05:20:55.485235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.048 [2024-12-09 05:20:55.485295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.048 [2024-12-09 05:20:55.485310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.048 [2024-12-09 05:20:55.485317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.048 [2024-12-09 05:20:55.485324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.048 [2024-12-09 05:20:55.485339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.048 qpair failed and we were unable to recover it. 00:26:19.048 [2024-12-09 05:20:55.495321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.048 [2024-12-09 05:20:55.495432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.048 [2024-12-09 05:20:55.495450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.048 [2024-12-09 05:20:55.495458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.048 [2024-12-09 05:20:55.495464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.048 [2024-12-09 05:20:55.495480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.048 qpair failed and we were unable to recover it. 
00:26:19.048 [2024-12-09 05:20:55.505360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.048 [2024-12-09 05:20:55.505444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.048 [2024-12-09 05:20:55.505459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.048 [2024-12-09 05:20:55.505466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.048 [2024-12-09 05:20:55.505472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.048 [2024-12-09 05:20:55.505487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.048 qpair failed and we were unable to recover it. 00:26:19.048 [2024-12-09 05:20:55.515361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.048 [2024-12-09 05:20:55.515424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.048 [2024-12-09 05:20:55.515439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.048 [2024-12-09 05:20:55.515446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.048 [2024-12-09 05:20:55.515453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.048 [2024-12-09 05:20:55.515468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.048 qpair failed and we were unable to recover it. 00:26:19.048 [2024-12-09 05:20:55.525345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.048 [2024-12-09 05:20:55.525411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.048 [2024-12-09 05:20:55.525426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.048 [2024-12-09 05:20:55.525433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.048 [2024-12-09 05:20:55.525440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.048 [2024-12-09 05:20:55.525456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.048 qpair failed and we were unable to recover it. 
00:26:19.048 [2024-12-09 05:20:55.535366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.048 [2024-12-09 05:20:55.535428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.048 [2024-12-09 05:20:55.535442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.048 [2024-12-09 05:20:55.535449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.048 [2024-12-09 05:20:55.535455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.048 [2024-12-09 05:20:55.535471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.048 qpair failed and we were unable to recover it. 00:26:19.048 [2024-12-09 05:20:55.545426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.048 [2024-12-09 05:20:55.545485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.048 [2024-12-09 05:20:55.545500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.048 [2024-12-09 05:20:55.545508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.048 [2024-12-09 05:20:55.545514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.048 [2024-12-09 05:20:55.545529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.048 qpair failed and we were unable to recover it. 00:26:19.048 [2024-12-09 05:20:55.555439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.048 [2024-12-09 05:20:55.555500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.048 [2024-12-09 05:20:55.555516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.048 [2024-12-09 05:20:55.555523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.048 [2024-12-09 05:20:55.555529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.048 [2024-12-09 05:20:55.555544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.048 qpair failed and we were unable to recover it. 
00:26:19.048 [2024-12-09 05:20:55.565502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.048 [2024-12-09 05:20:55.565567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.048 [2024-12-09 05:20:55.565582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.048 [2024-12-09 05:20:55.565593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.049 [2024-12-09 05:20:55.565599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.049 [2024-12-09 05:20:55.565615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.049 qpair failed and we were unable to recover it. 00:26:19.049 [2024-12-09 05:20:55.575500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.049 [2024-12-09 05:20:55.575562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.049 [2024-12-09 05:20:55.575578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.049 [2024-12-09 05:20:55.575586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.049 [2024-12-09 05:20:55.575592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.049 [2024-12-09 05:20:55.575608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.049 qpair failed and we were unable to recover it. 00:26:19.049 [2024-12-09 05:20:55.585505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.049 [2024-12-09 05:20:55.585566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.049 [2024-12-09 05:20:55.585581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.049 [2024-12-09 05:20:55.585588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.049 [2024-12-09 05:20:55.585594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.049 [2024-12-09 05:20:55.585610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.049 qpair failed and we were unable to recover it. 
00:26:19.049 [2024-12-09 05:20:55.595614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.049 [2024-12-09 05:20:55.595681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.049 [2024-12-09 05:20:55.595699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.049 [2024-12-09 05:20:55.595708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.049 [2024-12-09 05:20:55.595715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.049 [2024-12-09 05:20:55.595732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.049 qpair failed and we were unable to recover it. 00:26:19.049 [2024-12-09 05:20:55.605579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.049 [2024-12-09 05:20:55.605644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.049 [2024-12-09 05:20:55.605659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.049 [2024-12-09 05:20:55.605666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.049 [2024-12-09 05:20:55.605672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.049 [2024-12-09 05:20:55.605691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.049 qpair failed and we were unable to recover it. 00:26:19.049 [2024-12-09 05:20:55.615594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.049 [2024-12-09 05:20:55.615654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.049 [2024-12-09 05:20:55.615668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.049 [2024-12-09 05:20:55.615676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.049 [2024-12-09 05:20:55.615682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.049 [2024-12-09 05:20:55.615698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.049 qpair failed and we were unable to recover it. 
00:26:19.049 [2024-12-09 05:20:55.625627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.049 [2024-12-09 05:20:55.625730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.049 [2024-12-09 05:20:55.625745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.049 [2024-12-09 05:20:55.625752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.049 [2024-12-09 05:20:55.625759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.049 [2024-12-09 05:20:55.625775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.049 qpair failed and we were unable to recover it. 00:26:19.049 [2024-12-09 05:20:55.635716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.049 [2024-12-09 05:20:55.635779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.049 [2024-12-09 05:20:55.635793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.049 [2024-12-09 05:20:55.635801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.049 [2024-12-09 05:20:55.635807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.049 [2024-12-09 05:20:55.635823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.049 qpair failed and we were unable to recover it. 00:26:19.049 [2024-12-09 05:20:55.645646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.049 [2024-12-09 05:20:55.645704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.049 [2024-12-09 05:20:55.645719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.049 [2024-12-09 05:20:55.645727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.049 [2024-12-09 05:20:55.645733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.049 [2024-12-09 05:20:55.645749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.049 qpair failed and we were unable to recover it. 
00:26:19.049 [2024-12-09 05:20:55.655716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.049 [2024-12-09 05:20:55.655785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.049 [2024-12-09 05:20:55.655800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.049 [2024-12-09 05:20:55.655808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.049 [2024-12-09 05:20:55.655814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.049 [2024-12-09 05:20:55.655830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.049 qpair failed and we were unable to recover it. 00:26:19.049 [2024-12-09 05:20:55.665762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.049 [2024-12-09 05:20:55.665825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.049 [2024-12-09 05:20:55.665840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.049 [2024-12-09 05:20:55.665847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.049 [2024-12-09 05:20:55.665854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.049 [2024-12-09 05:20:55.665869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.049 qpair failed and we were unable to recover it. 00:26:19.049 [2024-12-09 05:20:55.675821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.049 [2024-12-09 05:20:55.675899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.049 [2024-12-09 05:20:55.675914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.049 [2024-12-09 05:20:55.675921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.049 [2024-12-09 05:20:55.675928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.049 [2024-12-09 05:20:55.675943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.049 qpair failed and we were unable to recover it. 
00:26:19.049 [2024-12-09 05:20:55.685805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.049 [2024-12-09 05:20:55.685874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.049 [2024-12-09 05:20:55.685891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.049 [2024-12-09 05:20:55.685898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.049 [2024-12-09 05:20:55.685905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.049 [2024-12-09 05:20:55.685923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.049 qpair failed and we were unable to recover it. 00:26:19.310 [2024-12-09 05:20:55.695850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.310 [2024-12-09 05:20:55.695918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.310 [2024-12-09 05:20:55.695937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.310 [2024-12-09 05:20:55.695944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.310 [2024-12-09 05:20:55.695951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.310 [2024-12-09 05:20:55.695966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.310 qpair failed and we were unable to recover it. 00:26:19.310 [2024-12-09 05:20:55.705885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.310 [2024-12-09 05:20:55.705982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.310 [2024-12-09 05:20:55.706001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.310 [2024-12-09 05:20:55.706009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.310 [2024-12-09 05:20:55.706015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.310 [2024-12-09 05:20:55.706030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.310 qpair failed and we were unable to recover it. 
00:26:19.310 [2024-12-09 05:20:55.715880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.310 [2024-12-09 05:20:55.715944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.310 [2024-12-09 05:20:55.715959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.310 [2024-12-09 05:20:55.715966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.310 [2024-12-09 05:20:55.715972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.310 [2024-12-09 05:20:55.715987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.310 qpair failed and we were unable to recover it. 00:26:19.310 [2024-12-09 05:20:55.725920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.310 [2024-12-09 05:20:55.725985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.310 [2024-12-09 05:20:55.726004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.310 [2024-12-09 05:20:55.726012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.310 [2024-12-09 05:20:55.726019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.310 [2024-12-09 05:20:55.726034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.310 qpair failed and we were unable to recover it. 00:26:19.310 [2024-12-09 05:20:55.735923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.310 [2024-12-09 05:20:55.735987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.310 [2024-12-09 05:20:55.736005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.310 [2024-12-09 05:20:55.736013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.310 [2024-12-09 05:20:55.736025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.310 [2024-12-09 05:20:55.736041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.310 qpair failed and we were unable to recover it. 
00:26:19.310 [2024-12-09 05:20:55.745982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.310 [2024-12-09 05:20:55.746082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.310 [2024-12-09 05:20:55.746099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.310 [2024-12-09 05:20:55.746108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.310 [2024-12-09 05:20:55.746115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.310 [2024-12-09 05:20:55.746132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.310 qpair failed and we were unable to recover it. 00:26:19.310 [2024-12-09 05:20:55.756015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.310 [2024-12-09 05:20:55.756088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.310 [2024-12-09 05:20:55.756102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.310 [2024-12-09 05:20:55.756110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.310 [2024-12-09 05:20:55.756116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.310 [2024-12-09 05:20:55.756132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.310 qpair failed and we were unable to recover it. 00:26:19.310 [2024-12-09 05:20:55.766066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.310 [2024-12-09 05:20:55.766169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.310 [2024-12-09 05:20:55.766184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.310 [2024-12-09 05:20:55.766191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.310 [2024-12-09 05:20:55.766198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.311 [2024-12-09 05:20:55.766213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.311 qpair failed and we were unable to recover it. 
00:26:19.311 [2024-12-09 05:20:55.776062] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.311 [2024-12-09 05:20:55.776125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.311 [2024-12-09 05:20:55.776140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.311 [2024-12-09 05:20:55.776148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.311 [2024-12-09 05:20:55.776154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.311 [2024-12-09 05:20:55.776171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.311 qpair failed and we were unable to recover it. 00:26:19.311 [2024-12-09 05:20:55.786074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.311 [2024-12-09 05:20:55.786139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.311 [2024-12-09 05:20:55.786154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.311 [2024-12-09 05:20:55.786161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.311 [2024-12-09 05:20:55.786167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.311 [2024-12-09 05:20:55.786182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.311 qpair failed and we were unable to recover it. 00:26:19.311 [2024-12-09 05:20:55.796078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.311 [2024-12-09 05:20:55.796144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.311 [2024-12-09 05:20:55.796160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.311 [2024-12-09 05:20:55.796168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.311 [2024-12-09 05:20:55.796174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.311 [2024-12-09 05:20:55.796190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.311 qpair failed and we were unable to recover it. 
00:26:19.311 [2024-12-09 05:20:55.806178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.311 [2024-12-09 05:20:55.806243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.311 [2024-12-09 05:20:55.806259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.311 [2024-12-09 05:20:55.806266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.311 [2024-12-09 05:20:55.806272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.311 [2024-12-09 05:20:55.806287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.311 qpair failed and we were unable to recover it. 00:26:19.311 [2024-12-09 05:20:55.816178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.311 [2024-12-09 05:20:55.816239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.311 [2024-12-09 05:20:55.816253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.311 [2024-12-09 05:20:55.816261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.311 [2024-12-09 05:20:55.816268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.311 [2024-12-09 05:20:55.816284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.311 qpair failed and we were unable to recover it. 00:26:19.311 [2024-12-09 05:20:55.826145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.311 [2024-12-09 05:20:55.826207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.311 [2024-12-09 05:20:55.826225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.311 [2024-12-09 05:20:55.826232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.311 [2024-12-09 05:20:55.826239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.311 [2024-12-09 05:20:55.826254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.311 qpair failed and we were unable to recover it. 
00:26:19.311 [2024-12-09 05:20:55.836266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.311 [2024-12-09 05:20:55.836340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.311 [2024-12-09 05:20:55.836355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.311 [2024-12-09 05:20:55.836363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.311 [2024-12-09 05:20:55.836370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.311 [2024-12-09 05:20:55.836385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.311 qpair failed and we were unable to recover it. 00:26:19.311 [2024-12-09 05:20:55.846270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.311 [2024-12-09 05:20:55.846334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.311 [2024-12-09 05:20:55.846349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.311 [2024-12-09 05:20:55.846357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.311 [2024-12-09 05:20:55.846363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.311 [2024-12-09 05:20:55.846379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.311 qpair failed and we were unable to recover it. 00:26:19.311 [2024-12-09 05:20:55.856290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.311 [2024-12-09 05:20:55.856372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.311 [2024-12-09 05:20:55.856387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.311 [2024-12-09 05:20:55.856394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.311 [2024-12-09 05:20:55.856400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.311 [2024-12-09 05:20:55.856415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.311 qpair failed and we were unable to recover it. 
00:26:19.311 [2024-12-09 05:20:55.866267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.311 [2024-12-09 05:20:55.866354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.311 [2024-12-09 05:20:55.866370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.311 [2024-12-09 05:20:55.866377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.311 [2024-12-09 05:20:55.866387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.311 [2024-12-09 05:20:55.866403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.311 qpair failed and we were unable to recover it. 00:26:19.311 [2024-12-09 05:20:55.876348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.311 [2024-12-09 05:20:55.876416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.311 [2024-12-09 05:20:55.876431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.311 [2024-12-09 05:20:55.876438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.311 [2024-12-09 05:20:55.876445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.311 [2024-12-09 05:20:55.876461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.311 qpair failed and we were unable to recover it. 00:26:19.311 [2024-12-09 05:20:55.886420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.311 [2024-12-09 05:20:55.886515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.311 [2024-12-09 05:20:55.886532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.311 [2024-12-09 05:20:55.886542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.311 [2024-12-09 05:20:55.886552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.311 [2024-12-09 05:20:55.886570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.311 qpair failed and we were unable to recover it. 
00:26:19.311 [2024-12-09 05:20:55.896468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.311 [2024-12-09 05:20:55.896545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.311 [2024-12-09 05:20:55.896562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.311 [2024-12-09 05:20:55.896569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.311 [2024-12-09 05:20:55.896576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.311 [2024-12-09 05:20:55.896592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.312 qpair failed and we were unable to recover it. 00:26:19.312 [2024-12-09 05:20:55.906410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.312 [2024-12-09 05:20:55.906498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.312 [2024-12-09 05:20:55.906512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.312 [2024-12-09 05:20:55.906519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.312 [2024-12-09 05:20:55.906526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.312 [2024-12-09 05:20:55.906541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.312 qpair failed and we were unable to recover it. 00:26:19.312 [2024-12-09 05:20:55.916464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.312 [2024-12-09 05:20:55.916525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.312 [2024-12-09 05:20:55.916540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.312 [2024-12-09 05:20:55.916548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.312 [2024-12-09 05:20:55.916554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.312 [2024-12-09 05:20:55.916571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.312 qpair failed and we were unable to recover it. 
00:26:19.312 [2024-12-09 05:20:55.926477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.312 [2024-12-09 05:20:55.926539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.312 [2024-12-09 05:20:55.926556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.312 [2024-12-09 05:20:55.926563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.312 [2024-12-09 05:20:55.926570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.312 [2024-12-09 05:20:55.926586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.312 qpair failed and we were unable to recover it. 00:26:19.312 [2024-12-09 05:20:55.936518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.312 [2024-12-09 05:20:55.936578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.312 [2024-12-09 05:20:55.936593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.312 [2024-12-09 05:20:55.936601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.312 [2024-12-09 05:20:55.936607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.312 [2024-12-09 05:20:55.936623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.312 qpair failed and we were unable to recover it. 00:26:19.312 [2024-12-09 05:20:55.946491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.312 [2024-12-09 05:20:55.946553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.312 [2024-12-09 05:20:55.946569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.312 [2024-12-09 05:20:55.946577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.312 [2024-12-09 05:20:55.946583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.312 [2024-12-09 05:20:55.946599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.312 qpair failed and we were unable to recover it. 
00:26:19.573 [2024-12-09 05:20:55.956526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.573 [2024-12-09 05:20:55.956590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.573 [2024-12-09 05:20:55.956608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.573 [2024-12-09 05:20:55.956616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.573 [2024-12-09 05:20:55.956622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.573 [2024-12-09 05:20:55.956637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.573 qpair failed and we were unable to recover it. 00:26:19.573 [2024-12-09 05:20:55.966616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.573 [2024-12-09 05:20:55.966680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.573 [2024-12-09 05:20:55.966694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.573 [2024-12-09 05:20:55.966701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.573 [2024-12-09 05:20:55.966708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.573 [2024-12-09 05:20:55.966724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.573 qpair failed and we were unable to recover it. 00:26:19.573 [2024-12-09 05:20:55.976630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.573 [2024-12-09 05:20:55.976691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.573 [2024-12-09 05:20:55.976706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.573 [2024-12-09 05:20:55.976714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.573 [2024-12-09 05:20:55.976720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.573 [2024-12-09 05:20:55.976735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.573 qpair failed and we were unable to recover it. 
00:26:19.573 [2024-12-09 05:20:55.986724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.573 [2024-12-09 05:20:55.986814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.573 [2024-12-09 05:20:55.986829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.573 [2024-12-09 05:20:55.986836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.573 [2024-12-09 05:20:55.986842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.573 [2024-12-09 05:20:55.986857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.573 qpair failed and we were unable to recover it. 00:26:19.573 [2024-12-09 05:20:55.996723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.573 [2024-12-09 05:20:55.996787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.573 [2024-12-09 05:20:55.996802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.573 [2024-12-09 05:20:55.996813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.573 [2024-12-09 05:20:55.996820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.573 [2024-12-09 05:20:55.996835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.573 qpair failed and we were unable to recover it. 00:26:19.573 [2024-12-09 05:20:56.006732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.573 [2024-12-09 05:20:56.006798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.573 [2024-12-09 05:20:56.006814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.573 [2024-12-09 05:20:56.006822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.573 [2024-12-09 05:20:56.006829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.573 [2024-12-09 05:20:56.006844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.573 qpair failed and we were unable to recover it. 
00:26:19.573 [2024-12-09 05:20:56.016766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.573 [2024-12-09 05:20:56.016831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.573 [2024-12-09 05:20:56.016846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.573 [2024-12-09 05:20:56.016854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.573 [2024-12-09 05:20:56.016860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.573 [2024-12-09 05:20:56.016875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.573 qpair failed and we were unable to recover it. 00:26:19.573 [2024-12-09 05:20:56.026816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.573 [2024-12-09 05:20:56.026882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.574 [2024-12-09 05:20:56.026897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.574 [2024-12-09 05:20:56.026905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.574 [2024-12-09 05:20:56.026911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.574 [2024-12-09 05:20:56.026927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.574 qpair failed and we were unable to recover it. 00:26:19.574 [2024-12-09 05:20:56.036758] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.574 [2024-12-09 05:20:56.036822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.574 [2024-12-09 05:20:56.036836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.574 [2024-12-09 05:20:56.036844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.574 [2024-12-09 05:20:56.036850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.574 [2024-12-09 05:20:56.036865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.574 qpair failed and we were unable to recover it. 
00:26:19.574 [2024-12-09 05:20:56.046846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.574 [2024-12-09 05:20:56.046909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.574 [2024-12-09 05:20:56.046924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.574 [2024-12-09 05:20:56.046931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.574 [2024-12-09 05:20:56.046937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.574 [2024-12-09 05:20:56.046953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.574 qpair failed and we were unable to recover it. 00:26:19.574 [2024-12-09 05:20:56.056897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.574 [2024-12-09 05:20:56.056986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.574 [2024-12-09 05:20:56.057005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.574 [2024-12-09 05:20:56.057014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.574 [2024-12-09 05:20:56.057020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.574 [2024-12-09 05:20:56.057036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.574 qpair failed and we were unable to recover it. 00:26:19.574 [2024-12-09 05:20:56.066899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.574 [2024-12-09 05:20:56.066961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.574 [2024-12-09 05:20:56.066976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.574 [2024-12-09 05:20:56.066983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.574 [2024-12-09 05:20:56.066990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.574 [2024-12-09 05:20:56.067010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.574 qpair failed and we were unable to recover it. 
00:26:19.574 [2024-12-09 05:20:56.076991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.574 [2024-12-09 05:20:56.077067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.574 [2024-12-09 05:20:56.077084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.574 [2024-12-09 05:20:56.077091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.574 [2024-12-09 05:20:56.077098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.574 [2024-12-09 05:20:56.077114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.574 qpair failed and we were unable to recover it. 00:26:19.574 [2024-12-09 05:20:56.086956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.574 [2024-12-09 05:20:56.087022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.574 [2024-12-09 05:20:56.087037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.574 [2024-12-09 05:20:56.087045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.574 [2024-12-09 05:20:56.087052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.574 [2024-12-09 05:20:56.087068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.574 qpair failed and we were unable to recover it. 00:26:19.574 [2024-12-09 05:20:56.096973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.574 [2024-12-09 05:20:56.097036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.574 [2024-12-09 05:20:56.097054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.574 [2024-12-09 05:20:56.097062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.574 [2024-12-09 05:20:56.097068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.574 [2024-12-09 05:20:56.097085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.574 qpair failed and we were unable to recover it. 
00:26:19.574 [2024-12-09 05:20:56.106993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.574 [2024-12-09 05:20:56.107060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.574 [2024-12-09 05:20:56.107076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.574 [2024-12-09 05:20:56.107083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.574 [2024-12-09 05:20:56.107090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.574 [2024-12-09 05:20:56.107106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.574 qpair failed and we were unable to recover it. 00:26:19.574 [2024-12-09 05:20:56.117089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.574 [2024-12-09 05:20:56.117149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.574 [2024-12-09 05:20:56.117163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.574 [2024-12-09 05:20:56.117171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.574 [2024-12-09 05:20:56.117177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.574 [2024-12-09 05:20:56.117192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.575 qpair failed and we were unable to recover it. 00:26:19.575 [2024-12-09 05:20:56.127065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.575 [2024-12-09 05:20:56.127136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.575 [2024-12-09 05:20:56.127152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.575 [2024-12-09 05:20:56.127163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.575 [2024-12-09 05:20:56.127171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.575 [2024-12-09 05:20:56.127186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.575 qpair failed and we were unable to recover it. 
00:26:19.575 [2024-12-09 05:20:56.137093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.575 [2024-12-09 05:20:56.137158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.575 [2024-12-09 05:20:56.137173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.575 [2024-12-09 05:20:56.137181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.575 [2024-12-09 05:20:56.137188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.575 [2024-12-09 05:20:56.137204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.575 qpair failed and we were unable to recover it. 00:26:19.575 [2024-12-09 05:20:56.147084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.575 [2024-12-09 05:20:56.147173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.575 [2024-12-09 05:20:56.147188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.575 [2024-12-09 05:20:56.147194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.575 [2024-12-09 05:20:56.147201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.575 [2024-12-09 05:20:56.147217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.575 qpair failed and we were unable to recover it. 00:26:19.575 [2024-12-09 05:20:56.157152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.575 [2024-12-09 05:20:56.157216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.575 [2024-12-09 05:20:56.157231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.575 [2024-12-09 05:20:56.157238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.575 [2024-12-09 05:20:56.157244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.575 [2024-12-09 05:20:56.157259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.575 qpair failed and we were unable to recover it. 
00:26:19.575 [2024-12-09 05:20:56.167184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.575 [2024-12-09 05:20:56.167247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.575 [2024-12-09 05:20:56.167262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.575 [2024-12-09 05:20:56.167270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.575 [2024-12-09 05:20:56.167276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.575 [2024-12-09 05:20:56.167294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.575 qpair failed and we were unable to recover it. 00:26:19.575 [2024-12-09 05:20:56.177203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.575 [2024-12-09 05:20:56.177265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.575 [2024-12-09 05:20:56.177280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.575 [2024-12-09 05:20:56.177288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.575 [2024-12-09 05:20:56.177294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.575 [2024-12-09 05:20:56.177309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.575 qpair failed and we were unable to recover it. 00:26:19.575 [2024-12-09 05:20:56.187238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.575 [2024-12-09 05:20:56.187313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.575 [2024-12-09 05:20:56.187328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.575 [2024-12-09 05:20:56.187335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.575 [2024-12-09 05:20:56.187342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.575 [2024-12-09 05:20:56.187358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.575 qpair failed and we were unable to recover it. 
00:26:19.575 [2024-12-09 05:20:56.197287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.575 [2024-12-09 05:20:56.197351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.575 [2024-12-09 05:20:56.197367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.575 [2024-12-09 05:20:56.197375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.575 [2024-12-09 05:20:56.197382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.575 [2024-12-09 05:20:56.197397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.575 qpair failed and we were unable to recover it. 00:26:19.575 [2024-12-09 05:20:56.207294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.575 [2024-12-09 05:20:56.207379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.575 [2024-12-09 05:20:56.207394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.575 [2024-12-09 05:20:56.207402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.575 [2024-12-09 05:20:56.207408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.575 [2024-12-09 05:20:56.207424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.575 qpair failed and we were unable to recover it. 00:26:19.836 [2024-12-09 05:20:56.217377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.836 [2024-12-09 05:20:56.217443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.836 [2024-12-09 05:20:56.217460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.836 [2024-12-09 05:20:56.217467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.836 [2024-12-09 05:20:56.217474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.836 [2024-12-09 05:20:56.217489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.836 qpair failed and we were unable to recover it. 
00:26:19.836 [2024-12-09 05:20:56.227389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.836 [2024-12-09 05:20:56.227497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.836 [2024-12-09 05:20:56.227512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.836 [2024-12-09 05:20:56.227521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.836 [2024-12-09 05:20:56.227527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.836 [2024-12-09 05:20:56.227544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.836 qpair failed and we were unable to recover it. 00:26:19.836 [2024-12-09 05:20:56.237390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.836 [2024-12-09 05:20:56.237452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.836 [2024-12-09 05:20:56.237467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.836 [2024-12-09 05:20:56.237475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.836 [2024-12-09 05:20:56.237481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.836 [2024-12-09 05:20:56.237496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.836 qpair failed and we were unable to recover it. 00:26:19.836 [2024-12-09 05:20:56.247425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.836 [2024-12-09 05:20:56.247488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.836 [2024-12-09 05:20:56.247505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.836 [2024-12-09 05:20:56.247513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.836 [2024-12-09 05:20:56.247519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.836 [2024-12-09 05:20:56.247535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.836 qpair failed and we were unable to recover it. 
00:26:19.836 [2024-12-09 05:20:56.257506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.836 [2024-12-09 05:20:56.257588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.836 [2024-12-09 05:20:56.257605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.836 [2024-12-09 05:20:56.257612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.836 [2024-12-09 05:20:56.257618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.836 [2024-12-09 05:20:56.257634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.836 qpair failed and we were unable to recover it. 00:26:19.836 [2024-12-09 05:20:56.267472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.836 [2024-12-09 05:20:56.267535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.836 [2024-12-09 05:20:56.267551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.836 [2024-12-09 05:20:56.267558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.836 [2024-12-09 05:20:56.267565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.836 [2024-12-09 05:20:56.267580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.836 qpair failed and we were unable to recover it. 00:26:19.836 [2024-12-09 05:20:56.277503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.836 [2024-12-09 05:20:56.277567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.836 [2024-12-09 05:20:56.277581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.836 [2024-12-09 05:20:56.277589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.836 [2024-12-09 05:20:56.277595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.836 [2024-12-09 05:20:56.277612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.836 qpair failed and we were unable to recover it. 
00:26:19.836 [2024-12-09 05:20:56.287586] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.837 [2024-12-09 05:20:56.287691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.837 [2024-12-09 05:20:56.287706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.837 [2024-12-09 05:20:56.287713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.837 [2024-12-09 05:20:56.287720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.837 [2024-12-09 05:20:56.287735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.837 qpair failed and we were unable to recover it. 00:26:19.837 [2024-12-09 05:20:56.297611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.837 [2024-12-09 05:20:56.297668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.837 [2024-12-09 05:20:56.297684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.837 [2024-12-09 05:20:56.297691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.837 [2024-12-09 05:20:56.297702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.837 [2024-12-09 05:20:56.297718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.837 qpair failed and we were unable to recover it. 00:26:19.837 [2024-12-09 05:20:56.307578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.837 [2024-12-09 05:20:56.307640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.837 [2024-12-09 05:20:56.307654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.837 [2024-12-09 05:20:56.307662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.837 [2024-12-09 05:20:56.307669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.837 [2024-12-09 05:20:56.307684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.837 qpair failed and we were unable to recover it. 
00:26:19.837 [2024-12-09 05:20:56.317654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.837 [2024-12-09 05:20:56.317767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.837 [2024-12-09 05:20:56.317782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.837 [2024-12-09 05:20:56.317790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.837 [2024-12-09 05:20:56.317796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.837 [2024-12-09 05:20:56.317812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.837 qpair failed and we were unable to recover it. 00:26:19.837 [2024-12-09 05:20:56.327731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.837 [2024-12-09 05:20:56.327791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.837 [2024-12-09 05:20:56.327806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.837 [2024-12-09 05:20:56.327813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.837 [2024-12-09 05:20:56.327819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.837 [2024-12-09 05:20:56.327836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.837 qpair failed and we were unable to recover it. 00:26:19.837 [2024-12-09 05:20:56.337696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.837 [2024-12-09 05:20:56.337757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.837 [2024-12-09 05:20:56.337772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.837 [2024-12-09 05:20:56.337780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.837 [2024-12-09 05:20:56.337787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.837 [2024-12-09 05:20:56.337802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.837 qpair failed and we were unable to recover it. 
00:26:19.837 [2024-12-09 05:20:56.347698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.837 [2024-12-09 05:20:56.347760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.837 [2024-12-09 05:20:56.347775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.837 [2024-12-09 05:20:56.347783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.837 [2024-12-09 05:20:56.347789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.837 [2024-12-09 05:20:56.347805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.837 qpair failed and we were unable to recover it. 00:26:19.837 [2024-12-09 05:20:56.357739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.837 [2024-12-09 05:20:56.357806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.837 [2024-12-09 05:20:56.357821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.837 [2024-12-09 05:20:56.357828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.837 [2024-12-09 05:20:56.357835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.837 [2024-12-09 05:20:56.357850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.837 qpair failed and we were unable to recover it. 00:26:19.837 [2024-12-09 05:20:56.367790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.837 [2024-12-09 05:20:56.367853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.837 [2024-12-09 05:20:56.367871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.837 [2024-12-09 05:20:56.367882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.837 [2024-12-09 05:20:56.367891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.837 [2024-12-09 05:20:56.367908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.837 qpair failed and we were unable to recover it. 
00:26:19.837 [2024-12-09 05:20:56.377859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.837 [2024-12-09 05:20:56.377944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.837 [2024-12-09 05:20:56.377959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.837 [2024-12-09 05:20:56.377967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.837 [2024-12-09 05:20:56.377973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.837 [2024-12-09 05:20:56.377989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.837 qpair failed and we were unable to recover it. 00:26:19.837 [2024-12-09 05:20:56.387821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.838 [2024-12-09 05:20:56.387881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.838 [2024-12-09 05:20:56.387899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.838 [2024-12-09 05:20:56.387907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.838 [2024-12-09 05:20:56.387913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.838 [2024-12-09 05:20:56.387928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.838 qpair failed and we were unable to recover it. 00:26:19.838 [2024-12-09 05:20:56.397863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.838 [2024-12-09 05:20:56.397923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.838 [2024-12-09 05:20:56.397939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.838 [2024-12-09 05:20:56.397947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.838 [2024-12-09 05:20:56.397953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96bc000b90 00:26:19.838 [2024-12-09 05:20:56.397968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:19.838 qpair failed and we were unable to recover it. 
00:26:19.838 [2024-12-09 05:20:56.407865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.838 [2024-12-09 05:20:56.407940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.838 [2024-12-09 05:20:56.407967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.838 [2024-12-09 05:20:56.407979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.838 [2024-12-09 05:20:56.407989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96b0000b90 00:26:19.838 [2024-12-09 05:20:56.408018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.838 qpair failed and we were unable to recover it. 00:26:19.838 [2024-12-09 05:20:56.417940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.838 [2024-12-09 05:20:56.418031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.838 [2024-12-09 05:20:56.418046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.838 [2024-12-09 05:20:56.418054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.838 [2024-12-09 05:20:56.418060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96b0000b90 00:26:19.838 [2024-12-09 05:20:56.418077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.838 qpair failed and we were unable to recover it. 00:26:19.838 [2024-12-09 05:20:56.427926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.838 [2024-12-09 05:20:56.428008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.838 [2024-12-09 05:20:56.428024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.838 [2024-12-09 05:20:56.428032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.838 [2024-12-09 05:20:56.428041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96b0000b90 00:26:19.838 [2024-12-09 05:20:56.428057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.838 qpair failed and we were unable to recover it. 
00:26:19.838 [2024-12-09 05:20:56.437975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.838 [2024-12-09 05:20:56.438050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.838 [2024-12-09 05:20:56.438072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.838 [2024-12-09 05:20:56.438081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.838 [2024-12-09 05:20:56.438088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96b4000b90 00:26:19.838 [2024-12-09 05:20:56.438106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.838 qpair failed and we were unable to recover it. 00:26:19.838 [2024-12-09 05:20:56.448016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.838 [2024-12-09 05:20:56.448091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.838 [2024-12-09 05:20:56.448106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.838 [2024-12-09 05:20:56.448113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.838 [2024-12-09 05:20:56.448119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96b4000b90 00:26:19.838 [2024-12-09 05:20:56.448135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:19.838 qpair failed and we were unable to recover it. 00:26:19.838 [2024-12-09 05:20:56.448213] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:26:19.838 A controller has encountered a failure and is being reset. 00:26:20.097 Controller properly reset. 00:26:20.097 Initializing NVMe Controllers 00:26:20.097 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:20.097 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:20.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:20.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:20.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:20.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:20.097 Initialization complete. Launching workers. 
00:26:20.097 Starting thread on core 1 00:26:20.097 Starting thread on core 2 00:26:20.097 Starting thread on core 3 00:26:20.097 Starting thread on core 0 00:26:20.097 05:20:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:26:20.097 00:26:20.097 real 0m11.617s 00:26:20.097 user 0m21.649s 00:26:20.097 sys 0m4.560s 00:26:20.097 05:20:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:20.097 05:20:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.097 ************************************ 00:26:20.097 END TEST nvmf_target_disconnect_tc2 00:26:20.097 ************************************ 00:26:20.097 05:20:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:26:20.097 05:20:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:20.097 05:20:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:26:20.097 05:20:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:20.097 05:20:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:26:20.097 05:20:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:20.097 05:20:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:26:20.097 05:20:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:20.097 05:20:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:20.097 rmmod nvme_tcp 00:26:20.097 rmmod nvme_fabrics 00:26:20.355 rmmod nvme_keyring 00:26:20.355 05:20:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:20.355 05:20:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:26:20.355 05:20:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:26:20.355 05:20:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3738966 ']' 00:26:20.355 05:20:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3738966 00:26:20.355 05:20:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3738966 ']' 00:26:20.355 05:20:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3738966 00:26:20.355 05:20:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:26:20.355 05:20:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:20.355 05:20:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3738966 00:26:20.355 05:20:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:26:20.355 05:20:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:26:20.355 05:20:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3738966' 00:26:20.355 killing process with pid 3738966 00:26:20.355 05:20:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 3738966 00:26:20.355 05:20:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3738966 00:26:20.613 05:20:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:20.613 05:20:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:20.613 05:20:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:20.613 05:20:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:26:20.613 05:20:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:20.613 05:20:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:26:20.613 05:20:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:26:20.613 05:20:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:20.614 05:20:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:20.614 05:20:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.614 05:20:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:20.614 05:20:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.517 05:20:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:22.517 00:26:22.517 real 0m20.086s 00:26:22.517 user 0m50.019s 00:26:22.517 sys 0m9.246s 00:26:22.517 05:20:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:22.517 05:20:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:22.517 ************************************ 00:26:22.517 END TEST nvmf_target_disconnect 00:26:22.517 ************************************ 00:26:22.776 05:20:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:26:22.776 00:26:22.776 real 5m46.383s 00:26:22.776 user 10m37.333s 00:26:22.776 sys 1m50.927s 00:26:22.776 05:20:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:22.776 05:20:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.776 ************************************ 00:26:22.776 END TEST nvmf_host 00:26:22.776 ************************************ 00:26:22.776 05:20:59 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:26:22.776 05:20:59 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:26:22.776 05:20:59 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:22.776 05:20:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:22.776 05:20:59 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:22.776 05:20:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:22.776 ************************************ 00:26:22.776 START TEST nvmf_target_core_interrupt_mode 00:26:22.776 ************************************ 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:22.776 * Looking for test storage... 00:26:22.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:22.776 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:22.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.776 --rc genhtml_branch_coverage=1 00:26:22.776 --rc genhtml_function_coverage=1 00:26:22.776 --rc genhtml_legend=1 00:26:22.776 --rc geninfo_all_blocks=1 00:26:22.776 --rc geninfo_unexecuted_blocks=1 00:26:22.777 00:26:22.777 ' 00:26:22.777 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:22.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.777 --rc genhtml_branch_coverage=1 00:26:22.777 --rc genhtml_function_coverage=1 00:26:22.777 --rc genhtml_legend=1 00:26:22.777 --rc geninfo_all_blocks=1 00:26:22.777 --rc geninfo_unexecuted_blocks=1 00:26:22.777 00:26:22.777 ' 00:26:22.777 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:22.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.777 --rc genhtml_branch_coverage=1 00:26:22.777 --rc genhtml_function_coverage=1 00:26:22.777 --rc genhtml_legend=1 00:26:22.777 --rc geninfo_all_blocks=1 00:26:22.777 --rc geninfo_unexecuted_blocks=1 00:26:22.777 00:26:22.777 ' 00:26:22.777 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:22.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.777 --rc genhtml_branch_coverage=1 00:26:22.777 --rc genhtml_function_coverage=1 00:26:22.777 --rc genhtml_legend=1 00:26:22.777 --rc geninfo_all_blocks=1 00:26:22.777 --rc geninfo_unexecuted_blocks=1 00:26:22.777 00:26:22.777 ' 00:26:22.777 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:26:22.777 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:26:22.777 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:22.777 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:26:22.777 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:22.777 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:22.777 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:22.777 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:22.777 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:22.777 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:22.777 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:22.777 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:22.777 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:22.777 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:23.036 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:23.036 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:23.036 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:23.036 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:23.036 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:23.036 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:23.036 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:23.036 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:23.037 ************************************ 00:26:23.037 START TEST nvmf_abort 00:26:23.037 ************************************ 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:23.037 * Looking for test storage... 00:26:23.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:23.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.037 --rc genhtml_branch_coverage=1 00:26:23.037 --rc genhtml_function_coverage=1 00:26:23.037 --rc genhtml_legend=1 00:26:23.037 --rc geninfo_all_blocks=1 00:26:23.037 --rc geninfo_unexecuted_blocks=1 00:26:23.037 00:26:23.037 ' 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:23.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.037 --rc genhtml_branch_coverage=1 00:26:23.037 --rc genhtml_function_coverage=1 00:26:23.037 --rc genhtml_legend=1 00:26:23.037 --rc geninfo_all_blocks=1 00:26:23.037 --rc geninfo_unexecuted_blocks=1 00:26:23.037 00:26:23.037 ' 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:23.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.037 --rc genhtml_branch_coverage=1 00:26:23.037 --rc genhtml_function_coverage=1 00:26:23.037 --rc genhtml_legend=1 00:26:23.037 --rc geninfo_all_blocks=1 00:26:23.037 --rc geninfo_unexecuted_blocks=1 00:26:23.037 00:26:23.037 ' 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:23.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.037 --rc genhtml_branch_coverage=1 00:26:23.037 --rc genhtml_function_coverage=1 00:26:23.037 --rc genhtml_legend=1 00:26:23.037 --rc geninfo_all_blocks=1 00:26:23.037 --rc geninfo_unexecuted_blocks=1 00:26:23.037 00:26:23.037 ' 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:23.037 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:23.038 05:20:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:26:23.038 05:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:28.305 05:21:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:28.305 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:28.305 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:28.305 Found net devices under 0000:86:00.0: cvl_0_0 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.305 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:28.305 Found net devices under 0000:86:00.1: cvl_0_1 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:28.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:28.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.414 ms 00:26:28.306 00:26:28.306 --- 10.0.0.2 ping statistics --- 00:26:28.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.306 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:28.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:28.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:26:28.306 00:26:28.306 --- 10.0.0.1 ping statistics --- 00:26:28.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.306 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3743741 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3743741 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3743741 ']' 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:28.306 05:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:28.565 [2024-12-09 05:21:04.971685] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:28.565 [2024-12-09 05:21:04.972643] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:26:28.565 [2024-12-09 05:21:04.972677] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.565 [2024-12-09 05:21:05.041717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:28.565 [2024-12-09 05:21:05.083964] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:28.565 [2024-12-09 05:21:05.084001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:28.565 [2024-12-09 05:21:05.084009] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:28.565 [2024-12-09 05:21:05.084015] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:28.565 [2024-12-09 05:21:05.084020] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:28.565 [2024-12-09 05:21:05.085424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:28.565 [2024-12-09 05:21:05.085510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:28.565 [2024-12-09 05:21:05.085512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.565 [2024-12-09 05:21:05.153793] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:28.565 [2024-12-09 05:21:05.153812] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:28.565 [2024-12-09 05:21:05.153992] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
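For reference, the nvmf_tcp_init sequence traced above reduces to the following shell steps; the namespace, interface names, addresses and the iptables rule are taken verbatim from the trace, and this is only a condensed sketch of what nvmf/common.sh does, not its full logic:

  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address stays on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                                  # host -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host sanity check

The two ping round trips above (0.414 ms and 0.219 ms) confirm the pair is reachable in both directions before the target application is started.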
00:26:28.565 [2024-12-09 05:21:05.154078] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:28.565 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:28.565 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:26:28.565 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:28.565 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:28.565 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:28.825 [2024-12-09 05:21:05.218124] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:28.825 Malloc0 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:28.825 Delay0 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:28.825 [2024-12-09 05:21:05.290136] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.825 05:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:26:28.825 [2024-12-09 05:21:05.404795] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:26:31.359 Initializing NVMe Controllers 00:26:31.359 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:26:31.359 controller IO queue size 128 less than required 00:26:31.359 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:26:31.359 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:26:31.359 Initialization complete. Launching workers. 
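Everything the abort test provisions goes through the RPC socket (/var/tmp/spdk.sock, per the waitforlisten message above). Pieced together from the rpc_cmd calls in the trace, the target setup and the abort run amount to roughly the following; rpc.py stands for the full scripts/rpc.py path shown in the log, and this is a sketch of the flow in target/abort.sh rather than the script verbatim:

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc.py bdev_malloc_create 64 4096 -b Malloc0                        # 64 MB malloc bdev, 4096-byte blocks
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000                     # wrap it in a delay bdev so I/O stays in flight long enough to abort
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 # -a: allow any host
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128                                   # 1 s of aborts at queue depth 128; the controller queue
                                                                      # cannot absorb it all, per the notice above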
00:26:31.359 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36912 00:26:31.359 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36969, failed to submit 66 00:26:31.359 success 36912, unsuccessful 57, failed 0 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:31.359 rmmod nvme_tcp 00:26:31.359 rmmod nvme_fabrics 00:26:31.359 rmmod nvme_keyring 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3743741 ']' 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3743741 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3743741 ']' 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3743741 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3743741 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3743741' 00:26:31.359 killing process with pid 3743741 
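The nvmftestfini teardown running here condenses to roughly the following; the command order follows the trace, and the namespace-deletion line is an assumption about what the _remove_spdk_ns helper does, since its body is not shown in this log:

  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  sync
  modprobe -v -r nvme-tcp                                  # also drops nvme_fabrics / nvme_keyring, per the rmmod lines above
  modprobe -v -r nvme-fabrics
  kill 3743741 && wait 3743741                             # killprocess: stop the nvmf_tgt started earlier
  iptables-save | grep -v SPDK_NVMF | iptables-restore     # iptr: remove only the SPDK_NVMF-tagged rule
  ip netns delete cvl_0_0_ns_spdk                          # assumed effect of _remove_spdk_ns (not traced here)
  ip -4 addr flush cvl_0_1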
00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3743741 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3743741 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:31.359 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.360 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:31.360 05:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.274 05:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:33.274 00:26:33.274 real 0m10.416s 00:26:33.274 user 0m10.045s 00:26:33.274 sys 0m5.199s 00:26:33.274 05:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:33.274 05:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:33.274 ************************************ 00:26:33.274 END TEST nvmf_abort 00:26:33.274 ************************************ 00:26:33.533 05:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:33.533 05:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:33.533 05:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:33.533 05:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:33.533 ************************************ 00:26:33.533 START TEST nvmf_ns_hotplug_stress 00:26:33.533 ************************************ 00:26:33.533 05:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:33.533 * Looking for test storage... 
00:26:33.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:33.533 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:33.533 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:26:33.533 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:33.533 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:33.533 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:33.533 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:33.533 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:33.533 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:26:33.533 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:26:33.533 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:26:33.533 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:26:33.533 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:26:33.533 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:26:33.533 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:26:33.533 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:33.533 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:26:33.533 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:26:33.533 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:33.533 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:33.533 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:26:33.533 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:26:33.533 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:33.533 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:26:33.533 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:26:33.533 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:26:33.533 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:33.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.534 --rc genhtml_branch_coverage=1 00:26:33.534 --rc genhtml_function_coverage=1 00:26:33.534 --rc genhtml_legend=1 00:26:33.534 --rc geninfo_all_blocks=1 00:26:33.534 --rc geninfo_unexecuted_blocks=1 00:26:33.534 00:26:33.534 ' 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:33.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.534 --rc genhtml_branch_coverage=1 00:26:33.534 --rc genhtml_function_coverage=1 00:26:33.534 --rc genhtml_legend=1 00:26:33.534 --rc geninfo_all_blocks=1 00:26:33.534 --rc geninfo_unexecuted_blocks=1 00:26:33.534 00:26:33.534 ' 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:33.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.534 --rc genhtml_branch_coverage=1 00:26:33.534 --rc genhtml_function_coverage=1 00:26:33.534 --rc genhtml_legend=1 00:26:33.534 --rc geninfo_all_blocks=1 00:26:33.534 --rc geninfo_unexecuted_blocks=1 00:26:33.534 00:26:33.534 ' 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:33.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.534 --rc genhtml_branch_coverage=1 00:26:33.534 --rc genhtml_function_coverage=1 
00:26:33.534 --rc genhtml_legend=1 00:26:33.534 --rc geninfo_all_blocks=1 00:26:33.534 --rc geninfo_unexecuted_blocks=1 00:26:33.534 00:26:33.534 ' 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.534 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:33.535 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:33.535 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:26:33.535 05:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:38.801 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:38.801 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:26:38.801 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:38.801 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:38.801 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:38.801 05:21:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:38.801 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:38.801 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:38.802 05:21:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:38.802 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:38.802 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:38.802 
05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:38.802 Found net devices under 0000:86:00.0: cvl_0_0 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:38.802 Found net devices under 0000:86:00.1: cvl_0_1 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:38.802 05:21:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:38.802 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:38.803 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:39.061 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:39.061 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:39.061 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:39.061 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:39.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:39.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:26:39.061 00:26:39.061 --- 10.0.0.2 ping statistics --- 00:26:39.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.061 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:26:39.061 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:39.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:39.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:26:39.061 00:26:39.061 --- 10.0.0.1 ping statistics --- 00:26:39.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.061 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:26:39.061 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:39.061 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:26:39.061 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:39.061 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:39.061 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:39.061 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:39.061 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:39.061 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:39.061 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:39.061 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:26:39.061 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:39.061 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:39.061 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:39.061 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:39.061 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3747918 00:26:39.061 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3747918 00:26:39.061 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3747918 ']' 00:26:39.061 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.061 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:39.061 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:39.061 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:39.061 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:39.061 [2024-12-09 05:21:15.549125] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:39.061 [2024-12-09 05:21:15.550074] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:26:39.061 [2024-12-09 05:21:15.550107] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.061 [2024-12-09 05:21:15.619669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:39.061 [2024-12-09 05:21:15.662719] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:39.061 [2024-12-09 05:21:15.662754] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:39.061 [2024-12-09 05:21:15.662762] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:39.061 [2024-12-09 05:21:15.662768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:39.061 [2024-12-09 05:21:15.662773] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:39.061 [2024-12-09 05:21:15.664114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:39.061 [2024-12-09 05:21:15.664206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.061 [2024-12-09 05:21:15.664208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:39.321 [2024-12-09 05:21:15.732939] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:39.321 [2024-12-09 05:21:15.733027] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:39.321 [2024-12-09 05:21:15.733339] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:39.321 [2024-12-09 05:21:15.733377] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
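The hotplug test starts its target with the same interrupt-mode invocation as the abort test; the flags traced above decode as follows (a sketch for readability, the actual command is the ip netns exec line in the trace):

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt \
      -i 0 \                # shared-memory ID 0
      -e 0xFFFF \           # enable all tracepoint groups ("Tracepoint Group Mask 0xFFFF specified")
      --interrupt-mode \    # reactors and spdk_threads wait on events instead of busy-polling
      -m 0xE                # core mask 0b1110: cores 1-3, matching the three reactor notices above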
00:26:39.321 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:39.321 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:26:39.321 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:39.321 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:39.321 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:39.321 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:39.321 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:26:39.321 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:39.321 [2024-12-09 05:21:15.964698] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:39.580 05:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:39.580 05:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:39.840 [2024-12-09 05:21:16.345326] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:39.840 05:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:40.099 05:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:26:40.357 Malloc0 00:26:40.357 05:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:40.357 Delay0 00:26:40.357 05:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:40.616 05:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:26:40.874 NULL1 00:26:40.874 05:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
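From this point ns_hotplug_stress.sh keeps a perf workload running against cnode1 while it repeatedly hot-removes and re-adds the namespace and grows the NULL1 bdev. The individual commands below are the ones traced in the following lines; the surrounding loop is an approximation of the script's structure, and rpc.py again stands for the full scripts/rpc.py path:

  build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &                        # 30 s of 128-deep 512-byte random reads; it keeps running
  PERF_PID=$!                                                          # through the read errors that hot-removal causes
  null_size=1000                                                       # (the suppressed "Read completed with error" messages below)
  while kill -0 "$PERF_PID"; do                                        # loop for as long as perf is still alive
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # hot-remove namespace 1 (Delay0)
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # hot-add it back
      null_size=$((null_size + 1))
      rpc.py bdev_null_resize NULL1 "$null_size"                       # resize the NULL1-backed namespace (1001, 1002, ...)
  done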
00:26:41.133 05:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3748382 00:26:41.133 05:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:26:41.133 05:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:26:41.133 05:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:41.133 05:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:41.391 05:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:26:41.391 05:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:26:41.649 true 00:26:41.649 05:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:26:41.649 05:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:41.907 05:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:42.165 05:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:26:42.165 05:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:26:42.165 true 00:26:42.165 05:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:26:42.165 05:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:43.536 Read completed with error (sct=0, sc=11) 00:26:43.536 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:43.536 05:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:43.536 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:43.536 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:43.536 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:43.536 05:21:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:26:43.536 05:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:26:43.794 true 00:26:43.794 05:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:26:43.794 05:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:44.052 05:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:44.340 05:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:26:44.340 05:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:26:44.340 true 00:26:44.340 05:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:26:44.340 05:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:44.642 05:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:44.900 05:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:26:44.900 05:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:26:44.900 true 00:26:44.900 05:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:26:44.900 05:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:45.157 05:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:45.415 05:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:26:45.415 05:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:26:45.673 true 00:26:45.673 05:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:26:45.673 05:21:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:46.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:46.609 05:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:46.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:46.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:46.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:46.867 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:46.867 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:46.867 05:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:26:46.867 05:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:26:47.126 true 00:26:47.126 05:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:26:47.126 05:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:48.063 05:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:48.063 05:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:26:48.063 05:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:26:48.322 true 00:26:48.322 05:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:26:48.322 05:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:48.587 05:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:48.587 05:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:26:48.587 05:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:26:48.851 true 00:26:48.851 05:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:26:48.851 05:21:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:49.788 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:49.788 05:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:49.788 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:50.047 05:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:26:50.047 05:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:26:50.306 true 00:26:50.306 05:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:26:50.306 05:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:50.565 05:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:50.565 05:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:26:50.565 05:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:26:50.824 true 00:26:50.824 05:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:26:50.824 05:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:52.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:52.201 05:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:52.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:52.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:52.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:52.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:52.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:52.201 05:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:26:52.201 05:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:26:52.460 true 00:26:52.460 05:21:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:26:52.460 05:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:53.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:53.393 05:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:53.393 05:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:26:53.394 05:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:26:53.652 true 00:26:53.652 05:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:26:53.652 05:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:53.910 05:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:54.167 05:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:26:54.167 05:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:26:54.167 true 00:26:54.167 05:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:26:54.167 05:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:55.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.539 05:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:55.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.539 05:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:26:55.539 05:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:26:55.796 true 00:26:55.796 05:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:26:55.796 05:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:56.727 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:56.727 05:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:56.727 05:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:26:56.727 05:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:26:56.985 true 00:26:56.985 05:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:26:56.985 05:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:57.242 05:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:57.500 05:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:26:57.500 05:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:26:57.500 true 00:26:57.500 05:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:26:57.500 05:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:58.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:58.877 05:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:58.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:58.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:58.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:58.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:58.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:58.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:58.877 05:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 
-- # null_size=1018 00:26:58.877 05:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:26:59.135 true 00:26:59.135 05:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:26:59.135 05:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:00.070 05:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:00.070 05:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:27:00.070 05:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:27:00.329 true 00:27:00.329 05:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:27:00.329 05:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:00.586 05:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:00.844 05:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:27:00.844 05:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:27:00.844 true 00:27:00.844 05:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:27:00.844 05:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:02.221 05:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:02.221 05:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:27:02.221 05:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:27:02.479 true 00:27:02.479 05:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:27:02.479 05:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:02.479 05:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:02.737 05:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:27:02.737 05:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:27:02.995 true 00:27:02.995 05:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:27:02.995 05:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:03.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:04.186 05:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:04.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:04.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:04.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:04.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:04.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:04.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:04.186 05:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:27:04.186 05:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:27:04.443 true 00:27:04.443 05:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:27:04.443 05:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:05.376 05:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:05.634 05:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:27:05.634 05:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:27:05.634 true 00:27:05.634 05:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:27:05.634 05:21:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:05.891 05:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:06.148 05:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:27:06.148 05:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:27:06.405 true 00:27:06.405 05:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:27:06.405 05:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:07.341 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:07.341 05:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:07.600 05:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:27:07.600 05:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:27:07.600 true 00:27:07.859 05:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:27:07.859 05:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:07.859 05:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:08.117 05:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:27:08.117 05:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:27:08.376 true 00:27:08.376 05:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:27:08.376 05:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:09.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:09.579 05:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:09.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:09.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:09.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:09.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:09.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:09.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:09.579 05:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:27:09.579 05:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:27:09.837 true 00:27:09.837 05:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:27:09.837 05:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:10.771 05:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:11.030 05:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:27:11.030 05:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:27:11.030 true 00:27:11.030 05:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:27:11.030 05:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:11.288 Initializing NVMe Controllers 00:27:11.288 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:11.288 Controller IO queue size 128, less than required. 00:27:11.288 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:11.288 Controller IO queue size 128, less than required. 00:27:11.288 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:11.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:11.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:11.288 Initialization complete. Launching workers. 
00:27:11.288 ======================================================== 00:27:11.288 Latency(us) 00:27:11.288 Device Information : IOPS MiB/s Average min max 00:27:11.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1720.12 0.84 45599.99 2255.58 1012279.65 00:27:11.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 15770.75 7.70 8095.50 1333.14 382880.93 00:27:11.288 ======================================================== 00:27:11.288 Total : 17490.87 8.54 11783.84 1333.14 1012279.65 00:27:11.288 00:27:11.288 05:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:11.546 05:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:27:11.546 05:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:27:11.803 true 00:27:11.803 05:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3748382 00:27:11.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3748382) - No such process 00:27:11.803 05:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3748382 00:27:11.803 05:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:11.803 05:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:12.060 05:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:27:12.060 05:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:27:12.060 05:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:27:12.060 05:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:12.060 05:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:27:12.318 null0 00:27:12.318 05:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:12.318 05:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:12.318 05:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:27:12.576 null1 00:27:12.576 05:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:12.576 
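The summary table above closes the first phase: NSID 1 (the Delay0-backed namespace) averaged about 1720 IOPS at a 45.6 ms mean latency, while NSID 2 (NULL1) averaged about 15771 IOPS at 8.1 ms. The Total row is the IOPS-weighted mean of the two per-namespace averages, which can be checked directly:

# Sanity check on the table above: Total average = IOPS-weighted mean of the two averages.
awk 'BEGIN { printf "%.2f\n", (1720.12*45599.99 + 15770.75*8095.50) / (1720.12 + 15770.75) }'
# prints roughly 11783.84, matching the reported Total average latency in microseconds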
05:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:12.576 05:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:27:12.576 null2 00:27:12.576 05:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:12.576 05:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:12.576 05:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:27:12.833 null3 00:27:12.833 05:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:12.833 05:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:12.833 05:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:27:13.090 null4 00:27:13.090 05:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:13.090 05:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:13.091 05:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:27:13.091 null5 00:27:13.091 05:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:13.091 05:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:13.091 05:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:27:13.348 null6 00:27:13.348 05:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:13.348 05:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:13.348 05:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:27:13.607 null7 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:13.607 05:21:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:27:13.607 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:13.608 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:13.608 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:13.608 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:13.608 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:27:13.608 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:13.608 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:13.608 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:27:13.608 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:13.608 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:13.608 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:13.608 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:13.608 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:13.608 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:27:13.608 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:13.608 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:27:13.608 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:13.608 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:13.608 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:13.608 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:13.608 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:13.608 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:13.608 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:27:13.608 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3753706 3753709 3753712 3753715 3753718 3753721 3753724 3753727 00:27:13.608 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:27:13.608 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:13.608 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:13.608 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:13.866 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:13.866 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:13.866 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:13.866 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:13.866 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:13.866 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:13.866 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:13.866 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:14.123 05:21:50 
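From sh@58 onward the trace shows the second phase: eight null bdevs (null0 through null7) are created and eight background workers each add and remove their own namespace (NSIDs 1 through 8) ten times in parallel against the live subsystem, with the sh@66 wait collecting them. A sketch reconstructed from the sh@58-sh@66 and sh@14-sh@18 xtrace lines (function and loop framing follow the trace; rpc and nqn are placeholders as before):

# Reconstructed from the sh@58-sh@66 and sh@14-sh@18 xtrace lines above; placeholders as before.
rpc=./scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {                                   # one worker per namespace ID / bdev pair
  local nsid=$1 bdev=$2
  for ((i = 0; i < 10; i++)); do
    $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # attach the bdev at a fixed NSID
    $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"           # and detach it again
  done
}

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
  $rpc bdev_null_create "null$i" 100 4096        # null0..null7, as traced
done
for ((i = 0; i < nthreads; i++)); do
  add_remove $((i + 1)) "null$i" &               # NSID 1..8 paired with null0..null7
  pids+=($!)
done
wait "${pids[@]}"                                # corresponds to the sh@66 wait in the trace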
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:14.123 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:14.123 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:14.123 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:14.123 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:14.123 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:14.123 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:14.123 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:14.123 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:14.123 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:14.123 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:14.123 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:14.123 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:14.123 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:14.123 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:14.123 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:14.123 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:14.123 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:14.123 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:14.123 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:14.123 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:14.123 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:14.123 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:14.123 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:14.123 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:14.123 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:14.123 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:14.380 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:14.380 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:14.380 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:14.380 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:14.380 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:14.380 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:14.380 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:14.380 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:14.380 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:14.380 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:14.380 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:14.381 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:14.381 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:14.381 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:14.381 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:14.381 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:14.381 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:14.381 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:14.381 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:14.381 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:14.381 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:14.381 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:14.381 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:14.381 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:14.381 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:14.381 05:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:14.381 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:14.381 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:14.381 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:14.638 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:14.638 05:21:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:14.638 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:14.638 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:14.638 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:14.638 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:14.638 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:14.638 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:14.896 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:14.896 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:14.896 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:14.896 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:14.896 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:14.896 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:14.896 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:14.896 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:14.896 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:14.896 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:14.896 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:14.896 05:21:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:14.896 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:14.896 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:14.896 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:14.896 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:14.896 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:14.896 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:14.896 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:14.896 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:14.896 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:14.896 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:14.896 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:14.896 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:15.154 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:15.154 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:15.154 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:15.154 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:15.154 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:15.154 
05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:15.154 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:15.154 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:15.154 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:15.154 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:15.154 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:15.154 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:15.154 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:15.154 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:15.154 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:15.154 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:15.154 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:15.154 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:15.154 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:15.154 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:15.413 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:15.413 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:15.413 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:15.413 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:15.413 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:15.413 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:15.413 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:15.413 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:15.413 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:15.413 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:15.413 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:15.413 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:15.413 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:15.413 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:15.413 05:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:15.413 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:15.413 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:15.413 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:15.413 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:15.413 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:15.672 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:15.672 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:27:15.672 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:15.672 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:15.672 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:15.672 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:15.672 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:15.672 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:15.672 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:15.672 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:15.672 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:15.672 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:15.672 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:15.672 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:15.672 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:15.672 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:15.672 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:15.673 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:15.673 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:15.673 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:15.673 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:15.673 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:15.673 
05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:15.673 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:15.931 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:15.931 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:15.931 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:15.931 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:15.931 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:15.931 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:15.931 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:15.931 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:15.931 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:15.931 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:15.932 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:16.191 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:16.191 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:16.191 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:16.191 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:16.191 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:16.191 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:16.191 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:16.191 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:16.191 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:16.191 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:16.191 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:16.191 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:16.191 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:16.191 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:16.191 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:16.191 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:16.191 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:16.191 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:16.191 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:16.191 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:16.191 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:16.191 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:16.191 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:16.191 05:21:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:16.449 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:16.449 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:16.449 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:16.449 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:16.449 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:16.449 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:16.449 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:16.449 05:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:16.449 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:16.449 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:16.449 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:16.449 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:16.449 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:16.449 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:16.449 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:16.450 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:16.450 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:16.450 
05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:16.450 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:16.450 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:16.450 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:16.450 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:16.450 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:16.450 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:16.450 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:16.450 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:16.450 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:16.450 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:16.450 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:16.709 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:16.709 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:16.709 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:16.709 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:16.709 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:16.709 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
00:27:16.709 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:16.709 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:16.968 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:16.968 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:16.968 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:16.968 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:16.968 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:16.968 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:16.968 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:16.968 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:16.968 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:16.968 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:16.968 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:16.968 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:16.968 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:16.968 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:16.968 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:16.968 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:16.968 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:16.968 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:16.968 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:16.968 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:16.968 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:16.968 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:16.968 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:16.968 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:16.968 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:17.227 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:17.227 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:17.227 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:17.227 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:17.227 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:17.227 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:17.227 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:17.227 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:17.227 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:17.227 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:17.227 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:17.227 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:17.227 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:17.486 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:17.486 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:17.486 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:17.486 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:17.486 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:17.486 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:17.486 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:17.486 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:17.486 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:17.486 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:17.486 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:17.486 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:17.486 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:17.486 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:17.486 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:17.486 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:17.486 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:17.486 
05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:17.486 05:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:17.486 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:17.486 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:17.486 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:17.486 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:17.486 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:17.486 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:17.486 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
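Note: the ns_hotplug_stress.sh@16-18 entries above are the hot-plug loop itself. Taken together with the 05:21:50-05:21:54 batches of eight add_ns calls followed by eight remove_ns calls, and the trailing run of bare "(( ++i )) (( i < 10 ))" checks, the trace is consistent with eight parallel workers, each cycling its own namespace ten times against nqn.2016-06.io.spdk:cnode1. A minimal sketch under that assumption; the worker function name, the parallel launch and the final wait are inferred from the interleaved ordering, not taken from the script:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # One worker per namespace: repeatedly hot-add and hot-remove the same NSID.
    add_remove() {
        local nsid=$1 bdev=$2
        for (( i = 0; i < 10; i++ )); do    # matches the sh@16 counter entries
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # sh@17
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # sh@18
        done
    }

    # Eight workers in parallel, one per null bdev, which would explain the
    # shuffled completion order of the add/remove entries in the trace above.
    for n in {0..7}; do
        add_remove $((n + 1)) "null$n" &
    done
    wait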
00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:17.744 rmmod nvme_tcp 00:27:17.744 rmmod nvme_fabrics 00:27:17.744 rmmod nvme_keyring 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3747918 ']' 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3747918 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3747918 ']' 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3747918 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:17.744 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3747918 00:27:18.003 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:18.003 05:21:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:18.003 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3747918' 00:27:18.003 killing process with pid 3747918 00:27:18.003 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3747918 00:27:18.003 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3747918 00:27:18.003 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:18.003 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:18.003 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:18.003 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:27:18.003 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:27:18.003 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:18.003 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:27:18.003 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:18.003 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:18.003 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.003 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:18.003 05:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:20.544 00:27:20.544 real 0m46.745s 00:27:20.544 user 2m58.596s 00:27:20.544 sys 0m19.160s 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:20.544 ************************************ 00:27:20.544 END TEST nvmf_ns_hotplug_stress 00:27:20.544 ************************************ 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:20.544 ************************************ 
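Note: once the loop exits, nvmftestfini (nvmf/common.sh@516-524 plus the killprocess helper above) unloads the kernel NVMe/TCP modules, stops the nvmf_tgt reactor (pid 3747918), strips the SPDK_NVMF iptables rules and flushes the test interface. A rough sketch of those cleanup steps as they appear in the trace; the polling wait is an assumption, the remaining commands mirror the logged ones:

    modprobe -v -r nvme-tcp       # also drops nvme_fabrics/nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    kill 3747918                  # nvmf_tgt pid recorded by the test
    while kill -0 3747918 2>/dev/null; do sleep 0.1; done   # wait for it to exit (assumed polling)
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # remove the test firewall rules
    ip -4 addr flush cvl_0_1      # clear the address on the test NIC (interface name from the trace)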
00:27:20.544 START TEST nvmf_delete_subsystem 00:27:20.544 ************************************ 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:20.544 * Looking for test storage... 00:27:20.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:20.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.544 --rc genhtml_branch_coverage=1 00:27:20.544 --rc genhtml_function_coverage=1 00:27:20.544 --rc genhtml_legend=1 00:27:20.544 --rc geninfo_all_blocks=1 00:27:20.544 --rc geninfo_unexecuted_blocks=1 00:27:20.544 00:27:20.544 ' 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:20.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.544 --rc genhtml_branch_coverage=1 00:27:20.544 --rc genhtml_function_coverage=1 00:27:20.544 --rc genhtml_legend=1 00:27:20.544 --rc geninfo_all_blocks=1 00:27:20.544 --rc geninfo_unexecuted_blocks=1 00:27:20.544 00:27:20.544 ' 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:20.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.544 --rc genhtml_branch_coverage=1 00:27:20.544 --rc genhtml_function_coverage=1 00:27:20.544 --rc genhtml_legend=1 00:27:20.544 --rc geninfo_all_blocks=1 00:27:20.544 --rc geninfo_unexecuted_blocks=1 00:27:20.544 00:27:20.544 ' 00:27:20.544 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:20.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.545 --rc genhtml_branch_coverage=1 00:27:20.545 --rc genhtml_function_coverage=1 00:27:20.545 --rc 
genhtml_legend=1 00:27:20.545 --rc geninfo_all_blocks=1 00:27:20.545 --rc geninfo_unexecuted_blocks=1 00:27:20.545 00:27:20.545 ' 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.545 05:21:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:27:20.545 05:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:25.807 05:22:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:25.807 05:22:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:25.807 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:25.807 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:25.807 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.808 05:22:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:25.808 Found net devices under 0000:86:00.0: cvl_0_0 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:25.808 Found net devices under 0000:86:00.1: cvl_0_1 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:25.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:25.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:27:25.808 00:27:25.808 --- 10.0.0.2 ping statistics --- 00:27:25.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.808 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:25.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:25.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:27:25.808 00:27:25.808 --- 10.0.0.1 ping statistics --- 00:27:25.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.808 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3757911 00:27:25.808 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3757911 00:27:26.067 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:27:26.067 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3757911 ']' 00:27:26.067 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.067 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:26.067 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:26.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:26.067 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:26.067 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:26.067 [2024-12-09 05:22:02.501116] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:26.067 [2024-12-09 05:22:02.502109] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:27:26.067 [2024-12-09 05:22:02.502151] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:26.067 [2024-12-09 05:22:02.572399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:26.067 [2024-12-09 05:22:02.615158] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:26.067 [2024-12-09 05:22:02.615194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:26.067 [2024-12-09 05:22:02.615202] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:26.067 [2024-12-09 05:22:02.615208] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:26.067 [2024-12-09 05:22:02.615213] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:26.067 [2024-12-09 05:22:02.616373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.067 [2024-12-09 05:22:02.616376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.067 [2024-12-09 05:22:02.686633] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:26.067 [2024-12-09 05:22:02.686826] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:26.067 [2024-12-09 05:22:02.686893] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:27:26.067 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:26.067 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:27:26.067 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:26.067 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:26.067 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:26.326 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:26.326 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:26.326 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.326 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:26.326 [2024-12-09 05:22:02.748928] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:26.326 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.326 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:26.326 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.326 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:26.326 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.326 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:26.326 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.326 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:26.327 [2024-12-09 05:22:02.773436] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:26.327 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.327 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:27:26.327 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.327 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:26.327 NULL1 00:27:26.327 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.327 05:22:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:26.327 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.327 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:26.327 Delay0 00:27:26.327 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.327 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:26.327 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.327 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:26.327 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.327 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3758125 00:27:26.327 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:27:26.327 05:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:27:26.327 [2024-12-09 05:22:02.872245] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
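From the trace, the body of delete_subsystem.sh up to this point amounts to the following sequence (a sketch; rpc_cmd in the trace is the harness wrapper around scripts/rpc.py, pointed at the target started above):

  # create the TCP transport, then a subsystem with a listener on the target address
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # back the namespace with a null bdev wrapped in a delay bdev, so I/O is still outstanding when the subsystem goes away
  scripts/rpc.py bdev_null_create NULL1 1000 512
  scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # drive load from the initiator side, then delete the subsystem underneath it
  ./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  sleep 2
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1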
00:27:28.231 05:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:28.231 05:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.231 05:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 starting I/O failed: -6 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 starting I/O failed: -6 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 starting I/O failed: -6 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 starting I/O failed: -6 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 starting I/O failed: -6 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 starting I/O failed: -6 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 starting I/O failed: -6 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 starting I/O failed: -6 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 starting I/O failed: -6 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 starting I/O failed: -6 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 starting I/O failed: -6 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 starting I/O failed: -6 00:27:28.491 starting I/O failed: -6 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read 
completed with error (sct=0, sc=8) 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 starting I/O failed: -6 00:27:28.491 starting I/O failed: -6 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 starting I/O failed: -6 00:27:28.491 starting I/O failed: -6 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 starting I/O failed: -6 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 [2024-12-09 05:22:04.958448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1467680 is same with the state(6) to be set 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 starting I/O failed: -6 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 starting I/O failed: -6 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 starting I/O failed: -6 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 starting I/O failed: -6 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 starting I/O failed: -6 00:27:28.491 starting I/O failed: -6 00:27:28.491 starting I/O failed: -6 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 starting I/O failed: -6 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 starting I/O failed: -6 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 starting I/O failed: -6 00:27:28.491 Write completed with error (sct=0, sc=8) 00:27:28.491 starting I/O failed: -6 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 starting I/O failed: -6 00:27:28.491 Read completed with error (sct=0, 
sc=8) 00:27:28.491 starting I/O failed: -6 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.491 starting I/O failed: -6 00:27:28.491 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 [2024-12-09 05:22:04.959151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f684000d680 is same with the state(6) to be set 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, 
sc=8) 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with 
error (sct=0, sc=8) 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Write completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 Read completed with error (sct=0, sc=8) 00:27:28.492 starting I/O failed: -6 00:27:28.492 [2024-12-09 05:22:04.959577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f684000d020 is same with the state(6) to be set 00:27:29.428 [2024-12-09 05:22:05.926408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14689b0 is same with the state(6) to be set 00:27:29.428 Read completed with error (sct=0, sc=8) 00:27:29.428 Read completed with error (sct=0, sc=8) 00:27:29.428 Read completed with error (sct=0, sc=8) 00:27:29.428 Write completed with error (sct=0, sc=8) 00:27:29.428 Read completed with error (sct=0, sc=8) 00:27:29.428 Read completed with error (sct=0, sc=8) 00:27:29.428 Write completed with error (sct=0, sc=8) 00:27:29.428 Read completed with error (sct=0, sc=8) 00:27:29.428 Write completed with error (sct=0, sc=8) 00:27:29.428 Read completed with error (sct=0, sc=8) 00:27:29.428 Read completed with error (sct=0, sc=8) 00:27:29.428 Read completed with error (sct=0, sc=8) 00:27:29.428 Read completed with error (sct=0, sc=8) 00:27:29.428 Read completed with error (sct=0, sc=8) 00:27:29.428 Read completed with error (sct=0, sc=8) 00:27:29.428 Read completed with error (sct=0, sc=8) 00:27:29.428 Read completed with error (sct=0, sc=8) 00:27:29.428 Read completed with error (sct=0, sc=8) 00:27:29.428 Write completed with error (sct=0, sc=8) 00:27:29.428 Read completed with error (sct=0, sc=8) 00:27:29.428 Read completed with error (sct=0, sc=8) 00:27:29.428 Read completed with error (sct=0, sc=8) 00:27:29.428 Read completed with error (sct=0, sc=8) 00:27:29.428 Write completed with error (sct=0, sc=8) 00:27:29.428 Read completed with error (sct=0, sc=8) 00:27:29.428 Read completed with error (sct=0, sc=8) 00:27:29.428 Write completed with error (sct=0, sc=8) 00:27:29.428 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Write completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 [2024-12-09 05:22:05.960552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14674a0 is same with the state(6) to be set 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Write completed with error (sct=0, sc=8) 00:27:29.429 Write completed with error (sct=0, sc=8) 00:27:29.429 Write completed with error (sct=0, sc=8) 
00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Write completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Write completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Write completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Write completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Write completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Write completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Write completed with error (sct=0, sc=8) 00:27:29.429 Write completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Write completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 [2024-12-09 05:22:05.960744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f684000d350 is same with the state(6) to be set 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Write completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Write completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Write completed with error (sct=0, sc=8) 00:27:29.429 Write completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Write completed with error (sct=0, sc=8) 00:27:29.429 Write completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Write completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read 
completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Write completed with error (sct=0, sc=8) 00:27:29.429 [2024-12-09 05:22:05.960898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1467860 is same with the state(6) to be set 00:27:29.429 Write completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Write completed with error (sct=0, sc=8) 00:27:29.429 Write completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Write completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Write completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Write completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Read completed with error (sct=0, sc=8) 00:27:29.429 Write completed with error (sct=0, sc=8) 00:27:29.429 [2024-12-09 05:22:05.961589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14672c0 is same with the state(6) to be set 00:27:29.429 Initializing NVMe Controllers 00:27:29.429 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:29.429 Controller IO queue size 128, less than required. 00:27:29.429 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:29.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:27:29.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:27:29.429 Initialization complete. Launching workers. 
00:27:29.429 ======================================================== 00:27:29.429 Latency(us) 00:27:29.429 Device Information : IOPS MiB/s Average min max 00:27:29.429 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 186.58 0.09 951671.54 1099.31 1012201.28 00:27:29.429 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 175.67 0.09 855548.53 366.43 1013120.76 00:27:29.429 ======================================================== 00:27:29.429 Total : 362.25 0.18 905058.46 366.43 1013120.76 00:27:29.429 00:27:29.429 [2024-12-09 05:22:05.962232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14689b0 (9): Bad file descriptor 00:27:29.429 05:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.429 05:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:27:29.429 05:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3758125 00:27:29.429 05:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:27:29.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:27:29.997 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:27:29.997 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3758125 00:27:29.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3758125) - No such process 00:27:29.997 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3758125 00:27:29.997 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:27:29.997 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3758125 00:27:29.997 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:27:29.997 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:29.997 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:27:29.997 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:29.997 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3758125 00:27:29.997 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:27:29.997 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:29.997 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:29.997 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:29.997 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:29.998 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.998 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:29.998 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.998 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:29.998 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.998 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:29.998 [2024-12-09 05:22:06.493365] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:29.998 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.998 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:29.998 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.998 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:29.998 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.998 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3758595 00:27:29.998 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:27:29.998 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:27:29.998 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3758595 00:27:29.998 05:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:29.998 [2024-12-09 05:22:06.565087] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:27:30.566 05:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:30.566 05:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3758595 00:27:30.566 05:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:31.133 05:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:31.133 05:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3758595 00:27:31.133 05:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:31.392 05:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:31.392 05:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3758595 00:27:31.392 05:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:31.958 05:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:31.958 05:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3758595 00:27:31.958 05:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:32.524 05:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:32.524 05:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3758595 00:27:32.524 05:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:33.090 05:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:33.090 05:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3758595 00:27:33.090 05:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:33.090 Initializing NVMe Controllers 00:27:33.090 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:33.090 Controller IO queue size 128, less than required. 00:27:33.090 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:33.090 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:27:33.090 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:27:33.090 Initialization complete. Launching workers. 
00:27:33.090 ======================================================== 00:27:33.090 Latency(us) 00:27:33.090 Device Information : IOPS MiB/s Average min max 00:27:33.090 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003547.04 1000155.10 1011684.86 00:27:33.090 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005568.09 1000309.89 1042192.30 00:27:33.090 ======================================================== 00:27:33.090 Total : 256.00 0.12 1004557.56 1000155.10 1042192.30 00:27:33.090 00:27:33.657 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:33.657 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3758595 00:27:33.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3758595) - No such process 00:27:33.657 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3758595 00:27:33.657 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:27:33.657 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:27:33.657 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:33.657 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:27:33.657 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:33.657 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:27:33.657 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:33.657 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:33.657 rmmod nvme_tcp 00:27:33.657 rmmod nvme_fabrics 00:27:33.657 rmmod nvme_keyring 00:27:33.657 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:33.657 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:27:33.657 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:27:33.657 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3757911 ']' 00:27:33.657 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3757911 00:27:33.657 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3757911 ']' 00:27:33.657 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3757911 00:27:33.657 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:27:33.657 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:33.657 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3757911 00:27:33.657 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:33.657 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:33.657 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3757911' 00:27:33.657 killing process with pid 3757911 00:27:33.657 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3757911 00:27:33.657 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3757911 00:27:33.916 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:33.916 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:33.916 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:33.916 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:27:33.916 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:33.916 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:27:33.916 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:27:33.916 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:33.916 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:33.916 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.916 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:33.916 05:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.912 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:35.912 00:27:35.912 real 0m15.659s 00:27:35.912 user 0m25.957s 00:27:35.912 sys 0m5.755s 00:27:35.912 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:35.912 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:35.912 ************************************ 00:27:35.912 END TEST nvmf_delete_subsystem 00:27:35.912 ************************************ 00:27:35.912 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:35.912 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:35.912 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:27:35.912 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:35.912 ************************************ 00:27:35.912 START TEST nvmf_host_management 00:27:35.912 ************************************ 00:27:35.912 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:36.192 * Looking for test storage... 00:27:36.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:36.192 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:36.192 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:27:36.192 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:36.192 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:36.192 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:36.192 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:36.192 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:36.192 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:27:36.192 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:27:36.192 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:27:36.192 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:27:36.192 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:27:36.192 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:27:36.192 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:27:36.192 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:36.192 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:27:36.192 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:27:36.192 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:36.192 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:36.192 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:27:36.192 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:27:36.192 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:36.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.193 --rc genhtml_branch_coverage=1 00:27:36.193 --rc genhtml_function_coverage=1 00:27:36.193 --rc genhtml_legend=1 00:27:36.193 --rc geninfo_all_blocks=1 00:27:36.193 --rc geninfo_unexecuted_blocks=1 00:27:36.193 00:27:36.193 ' 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:36.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.193 --rc genhtml_branch_coverage=1 00:27:36.193 --rc genhtml_function_coverage=1 00:27:36.193 --rc genhtml_legend=1 00:27:36.193 --rc geninfo_all_blocks=1 00:27:36.193 --rc geninfo_unexecuted_blocks=1 00:27:36.193 00:27:36.193 ' 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:36.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.193 --rc genhtml_branch_coverage=1 00:27:36.193 --rc genhtml_function_coverage=1 00:27:36.193 --rc genhtml_legend=1 00:27:36.193 --rc geninfo_all_blocks=1 00:27:36.193 --rc geninfo_unexecuted_blocks=1 00:27:36.193 00:27:36.193 ' 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:36.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.193 --rc genhtml_branch_coverage=1 00:27:36.193 --rc genhtml_function_coverage=1 00:27:36.193 --rc genhtml_legend=1 
00:27:36.193 --rc geninfo_all_blocks=1 00:27:36.193 --rc geninfo_unexecuted_blocks=1 00:27:36.193 00:27:36.193 ' 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:36.193 05:22:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:27:36.193 05:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:41.467 05:22:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:41.467 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:41.467 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:41.467 Found net devices under 0000:86:00.0: cvl_0_0 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:41.467 Found net devices under 0000:86:00.1: cvl_0_1 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:41.467 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:41.468 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:41.468 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:41.468 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:41.468 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:41.468 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:41.468 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:41.468 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:41.468 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:41.468 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:41.468 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:41.468 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:41.468 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:41.468 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:41.468 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:41.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:41.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:27:41.726 00:27:41.726 --- 10.0.0.2 ping statistics --- 00:27:41.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.726 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:41.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:41.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:27:41.726 00:27:41.726 --- 10.0.0.1 ping statistics --- 00:27:41.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.726 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3762687 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3762687 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3762687 ']' 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:41.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:41.726 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:41.984 [2024-12-09 05:22:18.399986] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:41.984 [2024-12-09 05:22:18.400979] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:27:41.984 [2024-12-09 05:22:18.401018] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:41.984 [2024-12-09 05:22:18.469536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:41.984 [2024-12-09 05:22:18.516682] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:41.984 [2024-12-09 05:22:18.516719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:41.984 [2024-12-09 05:22:18.516727] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:41.984 [2024-12-09 05:22:18.516734] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:41.985 [2024-12-09 05:22:18.516739] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:41.985 [2024-12-09 05:22:18.518248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:41.985 [2024-12-09 05:22:18.518323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:41.985 [2024-12-09 05:22:18.518433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:41.985 [2024-12-09 05:22:18.518434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:41.985 [2024-12-09 05:22:18.587513] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:41.985 [2024-12-09 05:22:18.587667] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:41.985 [2024-12-09 05:22:18.588109] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:41.985 [2024-12-09 05:22:18.588149] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:41.985 [2024-12-09 05:22:18.588295] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:27:41.985 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:41.985 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:27:41.985 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:41.985 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:41.985 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:42.242 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:42.242 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:42.242 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.242 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:42.242 [2024-12-09 05:22:18.650868] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.242 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.242 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:27:42.242 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:42.242 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:42.242 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:42.242 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:27:42.243 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:27:42.243 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.243 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:42.243 Malloc0 00:27:42.243 [2024-12-09 05:22:18.719092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.243 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.243 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:27:42.243 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:42.243 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:42.243 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3762847 00:27:42.243 05:22:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3762847 /var/tmp/bdevperf.sock 00:27:42.243 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3762847 ']' 00:27:42.243 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:42.243 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:42.243 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:27:42.243 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:42.243 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:42.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:42.243 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:27:42.243 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:42.243 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:27:42.243 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:42.243 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:42.243 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:42.243 { 00:27:42.243 "params": { 00:27:42.243 "name": "Nvme$subsystem", 00:27:42.243 "trtype": "$TEST_TRANSPORT", 00:27:42.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.243 "adrfam": "ipv4", 00:27:42.243 "trsvcid": "$NVMF_PORT", 00:27:42.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.243 "hdgst": ${hdgst:-false}, 00:27:42.243 "ddgst": ${ddgst:-false} 00:27:42.243 }, 00:27:42.243 "method": "bdev_nvme_attach_controller" 00:27:42.243 } 00:27:42.243 EOF 00:27:42.243 )") 00:27:42.243 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:27:42.243 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
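The heredoc above is the per-subsystem template that gen_nvmf_target_json expands; the filled-in JSON it feeds to bdevperf via --json /dev/fd/63 is printed just below. Roughly the same attach could be issued by hand against bdevperf's RPC socket, assuming the stock rpc.py option names (a sketch, not what the script actually runs):

  # hypothetical manual equivalent of the generated bdev_nvme_attach_controller entry
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0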
00:27:42.243 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:27:42.243 05:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:42.243 "params": { 00:27:42.243 "name": "Nvme0", 00:27:42.243 "trtype": "tcp", 00:27:42.243 "traddr": "10.0.0.2", 00:27:42.243 "adrfam": "ipv4", 00:27:42.243 "trsvcid": "4420", 00:27:42.243 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:42.243 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:42.243 "hdgst": false, 00:27:42.243 "ddgst": false 00:27:42.243 }, 00:27:42.243 "method": "bdev_nvme_attach_controller" 00:27:42.243 }' 00:27:42.243 [2024-12-09 05:22:18.817128] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:27:42.243 [2024-12-09 05:22:18.817173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3762847 ] 00:27:42.243 [2024-12-09 05:22:18.883498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.500 [2024-12-09 05:22:18.927914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.758 Running I/O for 10 seconds... 00:27:42.758 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:42.758 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:27:42.758 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:42.758 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.758 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:42.758 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.758 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:42.758 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:27:42.758 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:42.758 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:27:42.758 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:27:42.758 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:27:42.758 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:27:42.758 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:42.758 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:42.758 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:42.758 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.758 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:42.758 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.758 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:27:42.758 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:27:42.758 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:27:43.016 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:27:43.016 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:43.016 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:43.016 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:43.016 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.016 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:43.016 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.017 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:27:43.017 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:27:43.017 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:27:43.017 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:27:43.017 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:27:43.017 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:27:43.017 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.017 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:43.017 [2024-12-09 05:22:19.658871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.658915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.658923] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.658930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.658941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.658948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.658954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.658960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.658966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.658972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.658978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.658984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.658990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.658996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the 
state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.017 [2024-12-09 05:22:19.659260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2d70 is same with the state(6) to be set 00:27:43.277 [2024-12-09 05:22:19.660528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.277 [2024-12-09 05:22:19.660563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.277 [2024-12-09 05:22:19.660573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.277 [2024-12-09 05:22:19.660580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.277 [2024-12-09 05:22:19.660588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.277 [2024-12-09 05:22:19.660600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.277 [2024-12-09 05:22:19.660607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.277 [2024-12-09 05:22:19.660614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.277 [2024-12-09 05:22:19.660621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6510 is same with the state(6) to be set 00:27:43.277 [2024-12-09 05:22:19.662037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.277 [2024-12-09 05:22:19.662061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.277 [2024-12-09 05:22:19.662076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.277 [2024-12-09 05:22:19.662083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.277 [2024-12-09 05:22:19.662092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.277 [2024-12-09 05:22:19.662100] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.277 [2024-12-09 05:22:19.662109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.277 [2024-12-09 05:22:19.662116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662259] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662409] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.278 [2024-12-09 05:22:19.662679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.278 [2024-12-09 05:22:19.662685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.279 [2024-12-09 05:22:19.662693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.279 [2024-12-09 05:22:19.662700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.279 [2024-12-09 05:22:19.662708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.279 [2024-12-09 05:22:19.662715] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.279 [2024-12-09 05:22:19.662724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.279 [2024-12-09 05:22:19.662731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.279 [2024-12-09 05:22:19.662740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.279 [2024-12-09 05:22:19.662746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.279 [2024-12-09 05:22:19.662754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.279 [2024-12-09 05:22:19.662761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.279 [2024-12-09 05:22:19.662769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.279 [2024-12-09 05:22:19.662776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.279 [2024-12-09 05:22:19.662784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.279 [2024-12-09 05:22:19.662791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.279 [2024-12-09 05:22:19.662799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.279 [2024-12-09 05:22:19.662811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.279 [2024-12-09 05:22:19.662819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.279 [2024-12-09 05:22:19.662826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.279 [2024-12-09 05:22:19.662835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.279 [2024-12-09 05:22:19.662842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.279 [2024-12-09 05:22:19.662850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.279 [2024-12-09 05:22:19.662857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.279 [2024-12-09 05:22:19.662866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.279 [2024-12-09 05:22:19.662872] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.279 [2024-12-09 05:22:19.662880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.279 [2024-12-09 05:22:19.662888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.279 [2024-12-09 05:22:19.662896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.279 [2024-12-09 05:22:19.662903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.279 [2024-12-09 05:22:19.662911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.279 [2024-12-09 05:22:19.662918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.279 [2024-12-09 05:22:19.662926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.279 [2024-12-09 05:22:19.662932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.279 [2024-12-09 05:22:19.662941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.279 [2024-12-09 05:22:19.662947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.279 [2024-12-09 05:22:19.662956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.279 [2024-12-09 05:22:19.662962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.279 [2024-12-09 05:22:19.662970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.279 [2024-12-09 05:22:19.662976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.279 [2024-12-09 05:22:19.662985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.279 [2024-12-09 05:22:19.662995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.279 [2024-12-09 05:22:19.663010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.279 [2024-12-09 05:22:19.663017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.279 [2024-12-09 05:22:19.663026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.279 [2024-12-09 05:22:19.663033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.279 [2024-12-09 05:22:19.663041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.279 [2024-12-09 05:22:19.663048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.279 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.279 [2024-12-09 05:22:19.664043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:43.279 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:27:43.279 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.279 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:43.279 task offset: 92160 on job bdev=Nvme0n1 fails 00:27:43.279 00:27:43.279 Latency(us) 00:27:43.279 [2024-12-09T04:22:19.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:43.279 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:43.279 Job: Nvme0n1 ended in about 0.41 seconds with error 00:27:43.279 Verification LBA range: start 0x0 length 0x400 00:27:43.279 Nvme0n1 : 0.41 1745.69 109.11 155.17 0.00 32787.61 1381.95 27696.08 00:27:43.279 [2024-12-09T04:22:19.925Z] =================================================================================================================== 00:27:43.279 [2024-12-09T04:22:19.925Z] Total : 1745.69 109.11 155.17 0.00 32787.61 1381.95 27696.08 00:27:43.279 [2024-12-09 05:22:19.666455] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:43.279 [2024-12-09 05:22:19.666476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f6510 (9): Bad file descriptor 00:27:43.279 [2024-12-09 05:22:19.667574] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:27:43.279 [2024-12-09 05:22:19.667667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:43.279 [2024-12-09 05:22:19.667691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.279 [2024-12-09 05:22:19.667707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:27:43.279 [2024-12-09 05:22:19.667716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:27:43.279 [2024-12-09 05:22:19.667723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.279 [2024-12-09 05:22:19.667730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14f6510 00:27:43.279 [2024-12-09 05:22:19.667750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f6510 
(9): Bad file descriptor 00:27:43.279 [2024-12-09 05:22:19.667762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:43.279 [2024-12-09 05:22:19.667773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:43.279 [2024-12-09 05:22:19.667781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:43.279 [2024-12-09 05:22:19.667789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:43.279 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.279 05:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:27:44.257 05:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3762847 00:27:44.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3762847) - No such process 00:27:44.257 05:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:27:44.257 05:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:27:44.257 05:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:44.257 05:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:27:44.258 05:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:27:44.258 05:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:27:44.258 05:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:44.258 05:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:44.258 { 00:27:44.258 "params": { 00:27:44.258 "name": "Nvme$subsystem", 00:27:44.258 "trtype": "$TEST_TRANSPORT", 00:27:44.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.258 "adrfam": "ipv4", 00:27:44.258 "trsvcid": "$NVMF_PORT", 00:27:44.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.258 "hdgst": ${hdgst:-false}, 00:27:44.258 "ddgst": ${ddgst:-false} 00:27:44.258 }, 00:27:44.258 "method": "bdev_nvme_attach_controller" 00:27:44.258 } 00:27:44.258 EOF 00:27:44.258 )") 00:27:44.258 05:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:27:44.258 05:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
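Quick sanity check on the IOPS and MiB/s columns in the aborted first run's table above: bdevperf was started with -o 65536, i.e. 64 KiB (1/16 MiB) per I/O, so MiB/s is simply IOPS divided by 16, assuming bc is available:

  # 64 KiB per I/O  =>  MiB/s = IOPS * 65536 / 1048576 = IOPS / 16
  echo '1745.69 * 65536 / 1048576' | bc -l   # 109.105..., matching the 109.11 MiB/s reported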
00:27:44.258 05:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:27:44.258 05:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:44.258 "params": { 00:27:44.258 "name": "Nvme0", 00:27:44.258 "trtype": "tcp", 00:27:44.258 "traddr": "10.0.0.2", 00:27:44.258 "adrfam": "ipv4", 00:27:44.258 "trsvcid": "4420", 00:27:44.258 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:44.258 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:44.258 "hdgst": false, 00:27:44.258 "ddgst": false 00:27:44.258 }, 00:27:44.258 "method": "bdev_nvme_attach_controller" 00:27:44.258 }' 00:27:44.258 [2024-12-09 05:22:20.730860] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:27:44.258 [2024-12-09 05:22:20.730912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3763099 ] 00:27:44.258 [2024-12-09 05:22:20.796057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.258 [2024-12-09 05:22:20.837231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.522 Running I/O for 1 seconds... 00:27:45.902 1920.00 IOPS, 120.00 MiB/s 00:27:45.902 Latency(us) 00:27:45.902 [2024-12-09T04:22:22.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:45.902 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.902 Verification LBA range: start 0x0 length 0x400 00:27:45.902 Nvme0n1 : 1.03 1930.22 120.64 0.00 0.00 32646.09 6696.07 27354.16 00:27:45.902 [2024-12-09T04:22:22.548Z] =================================================================================================================== 00:27:45.902 [2024-12-09T04:22:22.548Z] Total : 1930.22 120.64 0.00 0.00 32646.09 6696.07 27354.16 00:27:45.902 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:27:45.902 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:27:45.902 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:45.902 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:45.902 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:27:45.902 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:45.902 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:27:45.902 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:45.902 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:27:45.902 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:45.902 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:45.902 rmmod nvme_tcp 00:27:45.902 rmmod nvme_fabrics 00:27:45.902 rmmod nvme_keyring 00:27:45.902 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:45.902 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:27:45.902 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:27:45.902 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3762687 ']' 00:27:45.902 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3762687 00:27:45.902 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3762687 ']' 00:27:45.902 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3762687 00:27:45.902 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:27:45.902 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:45.902 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3762687 00:27:45.902 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:45.902 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:45.902 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3762687' 00:27:45.902 killing process with pid 3762687 00:27:45.902 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3762687 00:27:45.902 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3762687 00:27:46.161 [2024-12-09 05:22:22.668253] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:27:46.161 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:46.161 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:46.161 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:46.161 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:27:46.161 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:27:46.161 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:27:46.161 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:46.161 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:46.161 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:27:46.161 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.161 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:46.161 05:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:48.698 00:27:48.698 real 0m12.288s 00:27:48.698 user 0m18.843s 00:27:48.698 sys 0m6.119s 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:48.698 ************************************ 00:27:48.698 END TEST nvmf_host_management 00:27:48.698 ************************************ 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:48.698 ************************************ 00:27:48.698 START TEST nvmf_lvol 00:27:48.698 ************************************ 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:48.698 * Looking for test storage... 
00:27:48.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:48.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.698 --rc genhtml_branch_coverage=1 00:27:48.698 --rc genhtml_function_coverage=1 00:27:48.698 --rc genhtml_legend=1 00:27:48.698 --rc geninfo_all_blocks=1 00:27:48.698 --rc geninfo_unexecuted_blocks=1 00:27:48.698 00:27:48.698 ' 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:48.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.698 --rc genhtml_branch_coverage=1 00:27:48.698 --rc genhtml_function_coverage=1 00:27:48.698 --rc genhtml_legend=1 00:27:48.698 --rc geninfo_all_blocks=1 00:27:48.698 --rc geninfo_unexecuted_blocks=1 00:27:48.698 00:27:48.698 ' 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:48.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.698 --rc genhtml_branch_coverage=1 00:27:48.698 --rc genhtml_function_coverage=1 00:27:48.698 --rc genhtml_legend=1 00:27:48.698 --rc geninfo_all_blocks=1 00:27:48.698 --rc geninfo_unexecuted_blocks=1 00:27:48.698 00:27:48.698 ' 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:48.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.698 --rc genhtml_branch_coverage=1 00:27:48.698 --rc genhtml_function_coverage=1 00:27:48.698 --rc genhtml_legend=1 00:27:48.698 --rc geninfo_all_blocks=1 00:27:48.698 --rc geninfo_unexecuted_blocks=1 00:27:48.698 00:27:48.698 ' 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:48.698 05:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:48.698 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:48.698 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:48.698 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:48.698 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:48.698 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:48.698 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:48.698 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:48.698 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:48.698 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:48.698 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:27:48.698 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:48.698 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:48.698 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:48.698 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.698 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.698 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.698 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:27:48.698 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.698 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:27:48.698 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:48.699 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:48.699 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:48.699 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:48.699 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:48.699 05:22:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:48.699 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:48.699 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:48.699 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:48.699 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:48.699 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:48.699 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:48.699 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:27:48.699 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:27:48.699 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:48.699 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:27:48.699 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:48.699 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:48.699 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:48.699 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:48.699 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:48.699 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.699 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:48.699 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:48.699 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:48.699 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:48.699 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:27:48.699 05:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:53.963 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:53.963 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:27:53.963 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:53.963 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:53.963 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:53.964 05:22:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:53.964 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:53.964 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:53.964 Found net devices under 0000:86:00.0: cvl_0_0 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:53.964 Found net devices under 0000:86:00.1: cvl_0_1 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:53.964 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:53.965 
05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:53.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:53.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:27:53.965 00:27:53.965 --- 10.0.0.2 ping statistics --- 00:27:53.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.965 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:53.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:53.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:27:53.965 00:27:53.965 --- 10.0.0.1 ping statistics --- 00:27:53.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.965 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3766857 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3766857 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3766857 ']' 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:53.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:53.965 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:54.223 [2024-12-09 05:22:30.646812] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
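For readers following the trace, this is the network layout the harness has just built: one e810 port is moved into a private namespace for the NVMe/TCP target while the other stays in the root namespace as the initiator. A minimal stand-alone sketch, assuming the same interface names (cvl_0_0, cvl_0_1), addresses, and listener port as this run:

  # target port goes into its own namespace, initiator port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # initiator gets 10.0.0.1, target gets 10.0.0.2 on the same /24
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # let the NVMe/TCP listener port through on the initiator side
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # reachability check in both directions, as in the ping output above
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1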
00:27:54.223 [2024-12-09 05:22:30.647819] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:27:54.223 [2024-12-09 05:22:30.647859] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:54.223 [2024-12-09 05:22:30.717159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:54.223 [2024-12-09 05:22:30.761215] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:54.223 [2024-12-09 05:22:30.761252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:54.223 [2024-12-09 05:22:30.761260] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:54.223 [2024-12-09 05:22:30.761270] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:54.223 [2024-12-09 05:22:30.761276] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:54.223 [2024-12-09 05:22:30.762652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:54.223 [2024-12-09 05:22:30.762747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:54.223 [2024-12-09 05:22:30.762750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.223 [2024-12-09 05:22:30.832053] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:54.223 [2024-12-09 05:22:30.832054] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:54.223 [2024-12-09 05:22:30.832153] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:54.223 [2024-12-09 05:22:30.832283] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
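The target process whose startup notices appear above is launched inside that namespace with a three-core mask and --interrupt-mode, and the harness waits for the RPC socket before issuing any configuration calls. A hedged sketch of that step, with a simple polling loop on rpc_get_methods standing in for the harness's own waitforlisten helper and the SPDK path taken from this workspace:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # start nvmf_tgt in the target namespace: shm id 0, all trace groups,
  # interrupt mode, cores 0-2 (mask 0x7), as echoed in the trace above
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
  nvmfpid=$!

  # block until the target answers on its default UNIX-domain RPC socket
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done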
00:27:54.223 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:54.223 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:27:54.223 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:54.223 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:54.223 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:54.480 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:54.480 05:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:54.480 [2024-12-09 05:22:31.075230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:54.480 05:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:54.738 05:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:27:54.738 05:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:54.996 05:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:27:54.996 05:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:27:55.254 05:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:27:55.512 05:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d21994a5-c77e-4629-9f68-ef45774f72f6 00:27:55.512 05:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d21994a5-c77e-4629-9f68-ef45774f72f6 lvol 20 00:27:55.512 05:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a7ffeada-fb02-4aac-942a-b0140d5a45f0 00:27:55.512 05:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:55.770 05:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a7ffeada-fb02-4aac-942a-b0140d5a45f0 00:27:56.028 05:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:56.285 [2024-12-09 05:22:32.699355] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:27:56.286 05:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:56.286 05:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3767340 00:27:56.286 05:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:27:56.286 05:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:27:57.662 05:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a7ffeada-fb02-4aac-942a-b0140d5a45f0 MY_SNAPSHOT 00:27:57.662 05:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=10166331-3214-4843-9656-ce64b0a53762 00:27:57.662 05:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a7ffeada-fb02-4aac-942a-b0140d5a45f0 30 00:27:57.921 05:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 10166331-3214-4843-9656-ce64b0a53762 MY_CLONE 00:27:58.180 05:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=9450c958-62a7-47b3-b2f5-8a8ff774dd2c 00:27:58.180 05:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 9450c958-62a7-47b3-b2f5-8a8ff774dd2c 00:27:58.747 05:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3767340 00:28:06.882 Initializing NVMe Controllers 00:28:06.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:06.882 Controller IO queue size 128, less than required. 00:28:06.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:06.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:28:06.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:28:06.882 Initialization complete. Launching workers. 
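Condensed from the rpc.py calls traced above, the lvol portion of the test boils down to the sequence below. Capturing each UUID into a shell variable is this sketch's shorthand; the script records the same values from the RPC output (d21994a5-..., a7ffeada-..., and so on).

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192      # flags copied verbatim from the trace
  $rpc bdev_malloc_create 64 512                    # -> Malloc0 (64 MiB, 512 B blocks)
  $rpc bdev_malloc_create 64 512                    # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'

  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)    # lvstore UUID
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)   # 20 MiB logical volume

  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # while spdk_nvme_perf writes to the exported namespace, exercise the lvol operations
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"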
00:28:06.882 ======================================================== 00:28:06.882 Latency(us) 00:28:06.882 Device Information : IOPS MiB/s Average min max 00:28:06.882 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12251.48 47.86 10448.39 1102.31 52189.36 00:28:06.882 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12160.88 47.50 10523.36 3067.33 46246.80 00:28:06.882 ======================================================== 00:28:06.882 Total : 24412.35 95.36 10485.74 1102.31 52189.36 00:28:06.882 00:28:06.882 05:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:07.141 05:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a7ffeada-fb02-4aac-942a-b0140d5a45f0 00:28:07.141 05:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d21994a5-c77e-4629-9f68-ef45774f72f6 00:28:07.400 05:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:28:07.400 05:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:28:07.400 05:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:28:07.400 05:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:07.400 05:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:28:07.400 05:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:07.400 05:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:28:07.400 05:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:07.400 05:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:07.400 rmmod nvme_tcp 00:28:07.400 rmmod nvme_fabrics 00:28:07.400 rmmod nvme_keyring 00:28:07.400 05:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:07.400 05:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:28:07.400 05:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:28:07.400 05:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3766857 ']' 00:28:07.400 05:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3766857 00:28:07.400 05:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3766857 ']' 00:28:07.400 05:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3766857 00:28:07.400 05:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:28:07.400 05:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:07.400 05:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3766857 00:28:07.659 05:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:07.659 05:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:07.659 05:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3766857' 00:28:07.659 killing process with pid 3766857 00:28:07.659 05:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3766857 00:28:07.659 05:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3766857 00:28:07.659 05:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:07.659 05:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:07.659 05:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:07.659 05:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:28:07.659 05:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:28:07.659 05:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:07.659 05:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:28:07.917 05:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:07.917 05:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:07.917 05:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.917 05:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:07.917 05:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.821 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:09.821 00:28:09.821 real 0m21.534s 00:28:09.821 user 0m55.710s 00:28:09.821 sys 0m9.551s 00:28:09.821 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:09.821 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:09.821 ************************************ 00:28:09.821 END TEST nvmf_lvol 00:28:09.821 ************************************ 00:28:09.821 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:28:09.821 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:09.821 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:09.821 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:09.821 ************************************ 00:28:09.821 START TEST nvmf_lvs_grow 00:28:09.821 
************************************ 00:28:09.821 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:28:10.080 * Looking for test storage... 00:28:10.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:10.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.080 --rc genhtml_branch_coverage=1 00:28:10.080 --rc genhtml_function_coverage=1 00:28:10.080 --rc genhtml_legend=1 00:28:10.080 --rc geninfo_all_blocks=1 00:28:10.080 --rc geninfo_unexecuted_blocks=1 00:28:10.080 00:28:10.080 ' 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:10.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.080 --rc genhtml_branch_coverage=1 00:28:10.080 --rc genhtml_function_coverage=1 00:28:10.080 --rc genhtml_legend=1 00:28:10.080 --rc geninfo_all_blocks=1 00:28:10.080 --rc geninfo_unexecuted_blocks=1 00:28:10.080 00:28:10.080 ' 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:10.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.080 --rc genhtml_branch_coverage=1 00:28:10.080 --rc genhtml_function_coverage=1 00:28:10.080 --rc genhtml_legend=1 00:28:10.080 --rc geninfo_all_blocks=1 00:28:10.080 --rc geninfo_unexecuted_blocks=1 00:28:10.080 00:28:10.080 ' 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:10.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.080 --rc genhtml_branch_coverage=1 00:28:10.080 --rc genhtml_function_coverage=1 00:28:10.080 --rc genhtml_legend=1 00:28:10.080 --rc geninfo_all_blocks=1 00:28:10.080 --rc geninfo_unexecuted_blocks=1 00:28:10.080 00:28:10.080 ' 00:28:10.080 05:22:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:28:10.080 05:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:15.352 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:15.352 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:28:15.352 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:15.352 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:15.352 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:15.352 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:15.352 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:15.352 05:22:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:28:15.352 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:15.352 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:28:15.352 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:15.353 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:15.353 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:15.353 Found net devices under 0000:86:00.0: cvl_0_0 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:15.353 Found net devices under 0000:86:00.1: cvl_0_1 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:15.353 05:22:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:15.353 05:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:15.613 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:15.613 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:15.613 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:15.613 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:15.613 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:15.613 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:15.613 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:15.613 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:15.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:15.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:28:15.613 00:28:15.613 --- 10.0.0.2 ping statistics --- 00:28:15.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.613 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:28:15.613 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:15.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:15.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:28:15.613 00:28:15.613 --- 10.0.0.1 ping statistics --- 00:28:15.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.613 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:28:15.613 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:15.613 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:28:15.613 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:15.614 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:15.614 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:15.614 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:15.614 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:15.614 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:15.614 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:15.614 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:28:15.614 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:15.614 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:15.614 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:15.614 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3772482 00:28:15.614 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3772482 00:28:15.614 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:28:15.614 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3772482 ']' 00:28:15.614 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.614 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:15.614 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.614 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:15.614 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:15.614 [2024-12-09 05:22:52.249902] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:28:15.614 [2024-12-09 05:22:52.250867] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:28:15.614 [2024-12-09 05:22:52.250903] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:15.873 [2024-12-09 05:22:52.319925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.873 [2024-12-09 05:22:52.361488] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:15.873 [2024-12-09 05:22:52.361525] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:15.873 [2024-12-09 05:22:52.361532] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:15.873 [2024-12-09 05:22:52.361539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:15.873 [2024-12-09 05:22:52.361544] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:15.873 [2024-12-09 05:22:52.362115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.873 [2024-12-09 05:22:52.432155] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:15.873 [2024-12-09 05:22:52.432379] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:15.873 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:15.873 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:28:15.873 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:15.873 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:15.873 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:15.873 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:15.873 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:16.132 [2024-12-09 05:22:52.658761] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:16.132 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:28:16.132 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:16.132 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:16.132 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:16.132 ************************************ 00:28:16.132 START TEST lvs_grow_clean 00:28:16.132 ************************************ 00:28:16.132 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:28:16.132 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:16.132 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:16.132 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:16.132 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:16.132 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:16.132 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:16.132 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:16.132 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:16.132 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:16.391 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:16.391 05:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:16.650 05:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=4d3f1e74-e774-4ba7-bd0a-cbe930c4f52a 00:28:16.650 05:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d3f1e74-e774-4ba7-bd0a-cbe930c4f52a 00:28:16.650 05:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:16.909 05:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:16.909 05:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:16.909 05:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4d3f1e74-e774-4ba7-bd0a-cbe930c4f52a lvol 150 00:28:16.909 05:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=137d5721-2b4a-4ca8-ae61-47cd4008abe0 00:28:16.909 05:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:16.909 05:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:17.168 [2024-12-09 05:22:53.698504] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:17.168 [2024-12-09 05:22:53.698629] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:17.168 true 00:28:17.168 05:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d3f1e74-e774-4ba7-bd0a-cbe930c4f52a 00:28:17.168 05:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:17.427 05:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:17.427 05:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:17.686 05:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 137d5721-2b4a-4ca8-ae61-47cd4008abe0 00:28:17.686 05:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:17.945 [2024-12-09 05:22:54.495026] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:17.945 05:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:18.205 05:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3772973 00:28:18.205 05:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:18.205 05:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:18.205 05:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3772973 /var/tmp/bdevperf.sock 00:28:18.205 05:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3772973 ']' 00:28:18.205 05:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:28:18.205 05:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:18.205 05:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:18.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:18.205 05:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:18.205 05:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:18.205 [2024-12-09 05:22:54.779772] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:28:18.205 [2024-12-09 05:22:54.779821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3772973 ] 00:28:18.205 [2024-12-09 05:22:54.844730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.463 [2024-12-09 05:22:54.887801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:18.463 05:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:18.464 05:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:28:18.464 05:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:18.721 Nvme0n1 00:28:18.721 05:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:18.979 [ 00:28:18.979 { 00:28:18.979 "name": "Nvme0n1", 00:28:18.979 "aliases": [ 00:28:18.979 "137d5721-2b4a-4ca8-ae61-47cd4008abe0" 00:28:18.979 ], 00:28:18.979 "product_name": "NVMe disk", 00:28:18.979 "block_size": 4096, 00:28:18.979 "num_blocks": 38912, 00:28:18.979 "uuid": "137d5721-2b4a-4ca8-ae61-47cd4008abe0", 00:28:18.979 "numa_id": 1, 00:28:18.979 "assigned_rate_limits": { 00:28:18.979 "rw_ios_per_sec": 0, 00:28:18.979 "rw_mbytes_per_sec": 0, 00:28:18.979 "r_mbytes_per_sec": 0, 00:28:18.979 "w_mbytes_per_sec": 0 00:28:18.979 }, 00:28:18.979 "claimed": false, 00:28:18.979 "zoned": false, 00:28:18.979 "supported_io_types": { 00:28:18.979 "read": true, 00:28:18.979 "write": true, 00:28:18.979 "unmap": true, 00:28:18.979 "flush": true, 00:28:18.979 "reset": true, 00:28:18.979 "nvme_admin": true, 00:28:18.979 "nvme_io": true, 00:28:18.979 "nvme_io_md": false, 00:28:18.979 "write_zeroes": true, 00:28:18.979 "zcopy": false, 00:28:18.979 "get_zone_info": false, 00:28:18.979 "zone_management": false, 00:28:18.979 "zone_append": false, 00:28:18.979 "compare": true, 00:28:18.979 "compare_and_write": true, 00:28:18.979 "abort": true, 00:28:18.979 "seek_hole": false, 00:28:18.979 "seek_data": false, 00:28:18.979 "copy": true, 
00:28:18.979 "nvme_iov_md": false 00:28:18.979 }, 00:28:18.979 "memory_domains": [ 00:28:18.979 { 00:28:18.979 "dma_device_id": "system", 00:28:18.979 "dma_device_type": 1 00:28:18.979 } 00:28:18.979 ], 00:28:18.979 "driver_specific": { 00:28:18.979 "nvme": [ 00:28:18.979 { 00:28:18.979 "trid": { 00:28:18.979 "trtype": "TCP", 00:28:18.979 "adrfam": "IPv4", 00:28:18.979 "traddr": "10.0.0.2", 00:28:18.979 "trsvcid": "4420", 00:28:18.979 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:18.979 }, 00:28:18.979 "ctrlr_data": { 00:28:18.979 "cntlid": 1, 00:28:18.979 "vendor_id": "0x8086", 00:28:18.979 "model_number": "SPDK bdev Controller", 00:28:18.979 "serial_number": "SPDK0", 00:28:18.979 "firmware_revision": "25.01", 00:28:18.979 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:18.979 "oacs": { 00:28:18.979 "security": 0, 00:28:18.979 "format": 0, 00:28:18.979 "firmware": 0, 00:28:18.979 "ns_manage": 0 00:28:18.979 }, 00:28:18.979 "multi_ctrlr": true, 00:28:18.979 "ana_reporting": false 00:28:18.979 }, 00:28:18.979 "vs": { 00:28:18.979 "nvme_version": "1.3" 00:28:18.979 }, 00:28:18.979 "ns_data": { 00:28:18.979 "id": 1, 00:28:18.979 "can_share": true 00:28:18.979 } 00:28:18.979 } 00:28:18.979 ], 00:28:18.979 "mp_policy": "active_passive" 00:28:18.979 } 00:28:18.979 } 00:28:18.979 ] 00:28:18.979 05:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3773189 00:28:18.979 05:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:18.979 05:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:18.979 Running I/O for 10 seconds... 
00:28:19.913 Latency(us) 00:28:19.913 [2024-12-09T04:22:56.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:19.913 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:19.913 Nvme0n1 : 1.00 22242.00 86.88 0.00 0.00 0.00 0.00 0.00 00:28:19.913 [2024-12-09T04:22:56.559Z] =================================================================================================================== 00:28:19.913 [2024-12-09T04:22:56.559Z] Total : 22242.00 86.88 0.00 0.00 0.00 0.00 0.00 00:28:19.913 00:28:20.871 05:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4d3f1e74-e774-4ba7-bd0a-cbe930c4f52a 00:28:21.130 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:21.130 Nvme0n1 : 2.00 22424.00 87.59 0.00 0.00 0.00 0.00 0.00 00:28:21.130 [2024-12-09T04:22:57.776Z] =================================================================================================================== 00:28:21.130 [2024-12-09T04:22:57.776Z] Total : 22424.00 87.59 0.00 0.00 0.00 0.00 0.00 00:28:21.130 00:28:21.130 true 00:28:21.130 05:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d3f1e74-e774-4ba7-bd0a-cbe930c4f52a 00:28:21.130 05:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:21.388 05:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:21.388 05:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:21.388 05:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3773189 00:28:21.953 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:21.953 Nvme0n1 : 3.00 22484.67 87.83 0.00 0.00 0.00 0.00 0.00 00:28:21.953 [2024-12-09T04:22:58.599Z] =================================================================================================================== 00:28:21.953 [2024-12-09T04:22:58.599Z] Total : 22484.67 87.83 0.00 0.00 0.00 0.00 0.00 00:28:21.953 00:28:23.326 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:23.326 Nvme0n1 : 4.00 22546.75 88.07 0.00 0.00 0.00 0.00 0.00 00:28:23.326 [2024-12-09T04:22:59.972Z] =================================================================================================================== 00:28:23.326 [2024-12-09T04:22:59.972Z] Total : 22546.75 88.07 0.00 0.00 0.00 0.00 0.00 00:28:23.326 00:28:24.258 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:24.258 Nvme0n1 : 5.00 22533.20 88.02 0.00 0.00 0.00 0.00 0.00 00:28:24.258 [2024-12-09T04:23:00.904Z] =================================================================================================================== 00:28:24.258 [2024-12-09T04:23:00.904Z] Total : 22533.20 88.02 0.00 0.00 0.00 0.00 0.00 00:28:24.258 00:28:25.191 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:25.191 Nvme0n1 : 6.00 22587.67 88.23 0.00 0.00 0.00 0.00 0.00 00:28:25.191 [2024-12-09T04:23:01.837Z] 
=================================================================================================================== 00:28:25.191 [2024-12-09T04:23:01.837Z] Total : 22587.67 88.23 0.00 0.00 0.00 0.00 0.00 00:28:25.191 00:28:26.125 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:26.125 Nvme0n1 : 7.00 22626.57 88.39 0.00 0.00 0.00 0.00 0.00 00:28:26.125 [2024-12-09T04:23:02.771Z] =================================================================================================================== 00:28:26.125 [2024-12-09T04:23:02.771Z] Total : 22626.57 88.39 0.00 0.00 0.00 0.00 0.00 00:28:26.125 00:28:27.059 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:27.059 Nvme0n1 : 8.00 22671.62 88.56 0.00 0.00 0.00 0.00 0.00 00:28:27.059 [2024-12-09T04:23:03.705Z] =================================================================================================================== 00:28:27.059 [2024-12-09T04:23:03.705Z] Total : 22671.62 88.56 0.00 0.00 0.00 0.00 0.00 00:28:27.059 00:28:28.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:28.061 Nvme0n1 : 9.00 22692.56 88.64 0.00 0.00 0.00 0.00 0.00 00:28:28.061 [2024-12-09T04:23:04.707Z] =================================================================================================================== 00:28:28.061 [2024-12-09T04:23:04.707Z] Total : 22692.56 88.64 0.00 0.00 0.00 0.00 0.00 00:28:28.061 00:28:29.001 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:29.001 Nvme0n1 : 10.00 22709.30 88.71 0.00 0.00 0.00 0.00 0.00 00:28:29.001 [2024-12-09T04:23:05.647Z] =================================================================================================================== 00:28:29.001 [2024-12-09T04:23:05.647Z] Total : 22709.30 88.71 0.00 0.00 0.00 0.00 0.00 00:28:29.001 00:28:29.001 00:28:29.001 Latency(us) 00:28:29.001 [2024-12-09T04:23:05.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.001 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:29.001 Nvme0n1 : 10.01 22707.18 88.70 0.00 0.00 5633.80 3333.79 15614.66 00:28:29.001 [2024-12-09T04:23:05.647Z] =================================================================================================================== 00:28:29.001 [2024-12-09T04:23:05.647Z] Total : 22707.18 88.70 0.00 0.00 5633.80 3333.79 15614.66 00:28:29.001 { 00:28:29.001 "results": [ 00:28:29.001 { 00:28:29.001 "job": "Nvme0n1", 00:28:29.001 "core_mask": "0x2", 00:28:29.001 "workload": "randwrite", 00:28:29.001 "status": "finished", 00:28:29.001 "queue_depth": 128, 00:28:29.001 "io_size": 4096, 00:28:29.001 "runtime": 10.006572, 00:28:29.001 "iops": 22707.176843378533, 00:28:29.001 "mibps": 88.6999095444474, 00:28:29.001 "io_failed": 0, 00:28:29.001 "io_timeout": 0, 00:28:29.001 "avg_latency_us": 5633.799730773506, 00:28:29.001 "min_latency_us": 3333.7878260869566, 00:28:29.001 "max_latency_us": 15614.664347826087 00:28:29.001 } 00:28:29.001 ], 00:28:29.001 "core_count": 1 00:28:29.001 } 00:28:29.001 05:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3772973 00:28:29.001 05:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3772973 ']' 00:28:29.001 05:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3772973 
00:28:29.001 05:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:28:29.001 05:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:29.001 05:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3772973 00:28:29.259 05:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:29.259 05:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:29.259 05:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3772973' 00:28:29.259 killing process with pid 3772973 00:28:29.259 05:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3772973 00:28:29.259 Received shutdown signal, test time was about 10.000000 seconds 00:28:29.259 00:28:29.259 Latency(us) 00:28:29.259 [2024-12-09T04:23:05.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.259 [2024-12-09T04:23:05.905Z] =================================================================================================================== 00:28:29.259 [2024-12-09T04:23:05.905Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:29.259 05:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3772973 00:28:29.259 05:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:29.523 05:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:29.782 05:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d3f1e74-e774-4ba7-bd0a-cbe930c4f52a 00:28:29.782 05:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:28:30.040 05:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:28:30.040 05:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:28:30.040 05:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:30.040 [2024-12-09 05:23:06.610475] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:30.040 05:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d3f1e74-e774-4ba7-bd0a-cbe930c4f52a 
00:28:30.040 05:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:28:30.040 05:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d3f1e74-e774-4ba7-bd0a-cbe930c4f52a 00:28:30.040 05:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:30.040 05:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:30.040 05:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:30.040 05:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:30.040 05:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:30.040 05:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:30.040 05:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:30.040 05:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:28:30.040 05:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d3f1e74-e774-4ba7-bd0a-cbe930c4f52a 00:28:30.298 request: 00:28:30.298 { 00:28:30.298 "uuid": "4d3f1e74-e774-4ba7-bd0a-cbe930c4f52a", 00:28:30.298 "method": "bdev_lvol_get_lvstores", 00:28:30.298 "req_id": 1 00:28:30.298 } 00:28:30.298 Got JSON-RPC error response 00:28:30.298 response: 00:28:30.298 { 00:28:30.298 "code": -19, 00:28:30.298 "message": "No such device" 00:28:30.298 } 00:28:30.298 05:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:28:30.298 05:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:30.298 05:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:30.298 05:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:30.298 05:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:30.556 aio_bdev 00:28:30.556 05:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
137d5721-2b4a-4ca8-ae61-47cd4008abe0 00:28:30.556 05:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=137d5721-2b4a-4ca8-ae61-47cd4008abe0 00:28:30.556 05:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:30.556 05:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:28:30.556 05:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:30.556 05:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:30.556 05:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:30.814 05:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 137d5721-2b4a-4ca8-ae61-47cd4008abe0 -t 2000 00:28:30.814 [ 00:28:30.814 { 00:28:30.814 "name": "137d5721-2b4a-4ca8-ae61-47cd4008abe0", 00:28:30.814 "aliases": [ 00:28:30.814 "lvs/lvol" 00:28:30.814 ], 00:28:30.814 "product_name": "Logical Volume", 00:28:30.814 "block_size": 4096, 00:28:30.814 "num_blocks": 38912, 00:28:30.814 "uuid": "137d5721-2b4a-4ca8-ae61-47cd4008abe0", 00:28:30.814 "assigned_rate_limits": { 00:28:30.814 "rw_ios_per_sec": 0, 00:28:30.814 "rw_mbytes_per_sec": 0, 00:28:30.814 "r_mbytes_per_sec": 0, 00:28:30.814 "w_mbytes_per_sec": 0 00:28:30.814 }, 00:28:30.814 "claimed": false, 00:28:30.814 "zoned": false, 00:28:30.814 "supported_io_types": { 00:28:30.814 "read": true, 00:28:30.814 "write": true, 00:28:30.814 "unmap": true, 00:28:30.814 "flush": false, 00:28:30.814 "reset": true, 00:28:30.814 "nvme_admin": false, 00:28:30.814 "nvme_io": false, 00:28:30.814 "nvme_io_md": false, 00:28:30.814 "write_zeroes": true, 00:28:30.814 "zcopy": false, 00:28:30.814 "get_zone_info": false, 00:28:30.814 "zone_management": false, 00:28:30.814 "zone_append": false, 00:28:30.814 "compare": false, 00:28:30.814 "compare_and_write": false, 00:28:30.814 "abort": false, 00:28:30.814 "seek_hole": true, 00:28:30.814 "seek_data": true, 00:28:30.814 "copy": false, 00:28:30.814 "nvme_iov_md": false 00:28:30.814 }, 00:28:30.814 "driver_specific": { 00:28:30.814 "lvol": { 00:28:30.814 "lvol_store_uuid": "4d3f1e74-e774-4ba7-bd0a-cbe930c4f52a", 00:28:30.814 "base_bdev": "aio_bdev", 00:28:30.814 "thin_provision": false, 00:28:30.814 "num_allocated_clusters": 38, 00:28:30.814 "snapshot": false, 00:28:30.814 "clone": false, 00:28:30.814 "esnap_clone": false 00:28:30.814 } 00:28:30.814 } 00:28:30.814 } 00:28:30.814 ] 00:28:30.814 05:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:28:30.814 05:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d3f1e74-e774-4ba7-bd0a-cbe930c4f52a 00:28:30.814 05:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:31.071 05:23:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:31.071 05:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d3f1e74-e774-4ba7-bd0a-cbe930c4f52a 00:28:31.071 05:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:31.328 05:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:31.328 05:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 137d5721-2b4a-4ca8-ae61-47cd4008abe0 00:28:31.586 05:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4d3f1e74-e774-4ba7-bd0a-cbe930c4f52a 00:28:31.844 05:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:31.844 05:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:31.844 00:28:31.844 real 0m15.776s 00:28:31.844 user 0m15.345s 00:28:31.844 sys 0m1.448s 00:28:31.844 05:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:31.844 05:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:31.844 ************************************ 00:28:31.844 END TEST lvs_grow_clean 00:28:31.844 ************************************ 00:28:32.101 05:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:28:32.101 05:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:32.101 05:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:32.101 05:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:32.101 ************************************ 00:28:32.101 START TEST lvs_grow_dirty 00:28:32.101 ************************************ 00:28:32.101 05:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:28:32.101 05:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:32.101 05:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:32.101 05:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:32.101 05:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:32.101 05:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:32.101 05:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:32.101 05:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:32.101 05:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:32.101 05:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:32.358 05:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:32.358 05:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:32.358 05:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b0a27f84-9aa3-4fc6-8dad-1c016d7852c9 00:28:32.358 05:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0a27f84-9aa3-4fc6-8dad-1c016d7852c9 00:28:32.358 05:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:32.615 05:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:32.615 05:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:32.615 05:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b0a27f84-9aa3-4fc6-8dad-1c016d7852c9 lvol 150 00:28:32.873 05:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=88ee5f29-fde2-4c5c-bf3c-661b27f837e5 00:28:32.873 05:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:32.873 05:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:33.131 [2024-12-09 05:23:09.526427] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:33.131 [2024-12-09 05:23:09.526497] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:33.131 true 00:28:33.131 05:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0a27f84-9aa3-4fc6-8dad-1c016d7852c9 00:28:33.131 05:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:33.131 05:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:33.131 05:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:33.388 05:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 88ee5f29-fde2-4c5c-bf3c-661b27f837e5 00:28:33.645 05:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:33.903 [2024-12-09 05:23:10.322954] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.903 05:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:33.903 05:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:33.903 05:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3775559 00:28:33.903 05:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:33.903 05:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3775559 /var/tmp/bdevperf.sock 00:28:33.903 05:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3775559 ']' 00:28:33.903 05:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:33.903 05:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.903 05:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:33.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:33.903 05:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:34.160 05:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:34.160 [2024-12-09 05:23:10.582946] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:28:34.160 [2024-12-09 05:23:10.582995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3775559 ] 00:28:34.160 [2024-12-09 05:23:10.648526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.160 [2024-12-09 05:23:10.690888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.160 05:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:34.160 05:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:28:34.160 05:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:34.726 Nvme0n1 00:28:34.726 05:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:34.726 [ 00:28:34.726 { 00:28:34.726 "name": "Nvme0n1", 00:28:34.726 "aliases": [ 00:28:34.726 "88ee5f29-fde2-4c5c-bf3c-661b27f837e5" 00:28:34.726 ], 00:28:34.726 "product_name": "NVMe disk", 00:28:34.726 "block_size": 4096, 00:28:34.726 "num_blocks": 38912, 00:28:34.726 "uuid": "88ee5f29-fde2-4c5c-bf3c-661b27f837e5", 00:28:34.726 "numa_id": 1, 00:28:34.726 "assigned_rate_limits": { 00:28:34.726 "rw_ios_per_sec": 0, 00:28:34.726 "rw_mbytes_per_sec": 0, 00:28:34.726 "r_mbytes_per_sec": 0, 00:28:34.726 "w_mbytes_per_sec": 0 00:28:34.726 }, 00:28:34.726 "claimed": false, 00:28:34.726 "zoned": false, 00:28:34.726 "supported_io_types": { 00:28:34.726 "read": true, 00:28:34.726 "write": true, 00:28:34.726 "unmap": true, 00:28:34.726 "flush": true, 00:28:34.726 "reset": true, 00:28:34.726 "nvme_admin": true, 00:28:34.726 "nvme_io": true, 00:28:34.726 "nvme_io_md": false, 00:28:34.726 "write_zeroes": true, 00:28:34.726 "zcopy": false, 00:28:34.726 "get_zone_info": false, 00:28:34.726 "zone_management": false, 00:28:34.726 "zone_append": false, 00:28:34.726 "compare": true, 00:28:34.726 "compare_and_write": true, 00:28:34.726 "abort": true, 00:28:34.726 "seek_hole": false, 00:28:34.726 "seek_data": false, 00:28:34.726 "copy": true, 00:28:34.726 "nvme_iov_md": false 00:28:34.726 }, 00:28:34.726 "memory_domains": [ 00:28:34.726 { 00:28:34.726 "dma_device_id": "system", 00:28:34.726 "dma_device_type": 1 00:28:34.726 } 00:28:34.726 ], 00:28:34.726 "driver_specific": { 00:28:34.726 "nvme": [ 00:28:34.726 { 00:28:34.726 "trid": { 00:28:34.726 "trtype": "TCP", 00:28:34.726 "adrfam": "IPv4", 00:28:34.726 "traddr": "10.0.0.2", 00:28:34.726 "trsvcid": "4420", 00:28:34.726 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:34.726 }, 00:28:34.726 "ctrlr_data": 
{ 00:28:34.726 "cntlid": 1, 00:28:34.726 "vendor_id": "0x8086", 00:28:34.726 "model_number": "SPDK bdev Controller", 00:28:34.726 "serial_number": "SPDK0", 00:28:34.726 "firmware_revision": "25.01", 00:28:34.726 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:34.726 "oacs": { 00:28:34.726 "security": 0, 00:28:34.726 "format": 0, 00:28:34.726 "firmware": 0, 00:28:34.726 "ns_manage": 0 00:28:34.726 }, 00:28:34.726 "multi_ctrlr": true, 00:28:34.726 "ana_reporting": false 00:28:34.726 }, 00:28:34.726 "vs": { 00:28:34.726 "nvme_version": "1.3" 00:28:34.726 }, 00:28:34.726 "ns_data": { 00:28:34.726 "id": 1, 00:28:34.726 "can_share": true 00:28:34.726 } 00:28:34.726 } 00:28:34.726 ], 00:28:34.726 "mp_policy": "active_passive" 00:28:34.726 } 00:28:34.726 } 00:28:34.726 ] 00:28:34.726 05:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3775788 00:28:34.726 05:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:34.726 05:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:34.985 Running I/O for 10 seconds... 00:28:35.920 Latency(us) 00:28:35.920 [2024-12-09T04:23:12.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.920 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:35.920 Nvme0n1 : 1.00 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:28:35.920 [2024-12-09T04:23:12.566Z] =================================================================================================================== 00:28:35.920 [2024-12-09T04:23:12.566Z] Total : 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:28:35.920 00:28:36.854 05:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b0a27f84-9aa3-4fc6-8dad-1c016d7852c9 00:28:36.854 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:36.854 Nvme0n1 : 2.00 22511.00 87.93 0.00 0.00 0.00 0.00 0.00 00:28:36.854 [2024-12-09T04:23:13.500Z] =================================================================================================================== 00:28:36.854 [2024-12-09T04:23:13.500Z] Total : 22511.00 87.93 0.00 0.00 0.00 0.00 0.00 00:28:36.854 00:28:36.854 true 00:28:36.854 05:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0a27f84-9aa3-4fc6-8dad-1c016d7852c9 00:28:36.854 05:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:37.112 05:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:37.112 05:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:37.112 05:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3775788 00:28:38.045 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:38.045 Nvme0n1 : 
3.00 22580.00 88.20 0.00 0.00 0.00 0.00 0.00 00:28:38.045 [2024-12-09T04:23:14.691Z] =================================================================================================================== 00:28:38.045 [2024-12-09T04:23:14.691Z] Total : 22580.00 88.20 0.00 0.00 0.00 0.00 0.00 00:28:38.045 00:28:38.979 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:38.979 Nvme0n1 : 4.00 22634.25 88.42 0.00 0.00 0.00 0.00 0.00 00:28:38.979 [2024-12-09T04:23:15.625Z] =================================================================================================================== 00:28:38.979 [2024-12-09T04:23:15.625Z] Total : 22634.25 88.42 0.00 0.00 0.00 0.00 0.00 00:28:38.979 00:28:39.915 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:39.915 Nvme0n1 : 5.00 22676.40 88.58 0.00 0.00 0.00 0.00 0.00 00:28:39.915 [2024-12-09T04:23:16.561Z] =================================================================================================================== 00:28:39.915 [2024-12-09T04:23:16.561Z] Total : 22676.40 88.58 0.00 0.00 0.00 0.00 0.00 00:28:39.915 00:28:40.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:40.848 Nvme0n1 : 6.00 22707.00 88.70 0.00 0.00 0.00 0.00 0.00 00:28:40.848 [2024-12-09T04:23:17.494Z] =================================================================================================================== 00:28:40.848 [2024-12-09T04:23:17.494Z] Total : 22707.00 88.70 0.00 0.00 0.00 0.00 0.00 00:28:40.848 00:28:41.803 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:41.803 Nvme0n1 : 7.00 22728.86 88.78 0.00 0.00 0.00 0.00 0.00 00:28:41.804 [2024-12-09T04:23:18.450Z] =================================================================================================================== 00:28:41.804 [2024-12-09T04:23:18.450Z] Total : 22728.86 88.78 0.00 0.00 0.00 0.00 0.00 00:28:41.804 00:28:43.175 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:43.175 Nvme0n1 : 8.00 22753.25 88.88 0.00 0.00 0.00 0.00 0.00 00:28:43.175 [2024-12-09T04:23:19.821Z] =================================================================================================================== 00:28:43.175 [2024-12-09T04:23:19.821Z] Total : 22753.25 88.88 0.00 0.00 0.00 0.00 0.00 00:28:43.175 00:28:44.108 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:44.108 Nvme0n1 : 9.00 22751.22 88.87 0.00 0.00 0.00 0.00 0.00 00:28:44.108 [2024-12-09T04:23:20.754Z] =================================================================================================================== 00:28:44.108 [2024-12-09T04:23:20.754Z] Total : 22751.22 88.87 0.00 0.00 0.00 0.00 0.00 00:28:44.108 00:28:45.043 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:45.043 Nvme0n1 : 10.00 22760.90 88.91 0.00 0.00 0.00 0.00 0.00 00:28:45.043 [2024-12-09T04:23:21.689Z] =================================================================================================================== 00:28:45.043 [2024-12-09T04:23:21.689Z] Total : 22760.90 88.91 0.00 0.00 0.00 0.00 0.00 00:28:45.043 00:28:45.043 00:28:45.043 Latency(us) 00:28:45.043 [2024-12-09T04:23:21.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.043 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:45.043 Nvme0n1 : 10.00 22756.16 88.89 0.00 0.00 5621.26 2934.87 15272.74 00:28:45.043 
[2024-12-09T04:23:21.689Z] =================================================================================================================== 00:28:45.043 [2024-12-09T04:23:21.689Z] Total : 22756.16 88.89 0.00 0.00 5621.26 2934.87 15272.74 00:28:45.043 { 00:28:45.043 "results": [ 00:28:45.043 { 00:28:45.043 "job": "Nvme0n1", 00:28:45.043 "core_mask": "0x2", 00:28:45.043 "workload": "randwrite", 00:28:45.044 "status": "finished", 00:28:45.044 "queue_depth": 128, 00:28:45.044 "io_size": 4096, 00:28:45.044 "runtime": 10.002655, 00:28:45.044 "iops": 22756.158239987282, 00:28:45.044 "mibps": 88.89124312495032, 00:28:45.044 "io_failed": 0, 00:28:45.044 "io_timeout": 0, 00:28:45.044 "avg_latency_us": 5621.257175951128, 00:28:45.044 "min_latency_us": 2934.873043478261, 00:28:45.044 "max_latency_us": 15272.737391304348 00:28:45.044 } 00:28:45.044 ], 00:28:45.044 "core_count": 1 00:28:45.044 } 00:28:45.044 05:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3775559 00:28:45.044 05:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3775559 ']' 00:28:45.044 05:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3775559 00:28:45.044 05:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:28:45.044 05:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:45.044 05:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3775559 00:28:45.044 05:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:45.044 05:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:45.044 05:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3775559' 00:28:45.044 killing process with pid 3775559 00:28:45.044 05:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3775559 00:28:45.044 Received shutdown signal, test time was about 10.000000 seconds 00:28:45.044 00:28:45.044 Latency(us) 00:28:45.044 [2024-12-09T04:23:21.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.044 [2024-12-09T04:23:21.690Z] =================================================================================================================== 00:28:45.044 [2024-12-09T04:23:21.690Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:45.044 05:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3775559 00:28:45.302 05:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:45.303 05:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:28:45.561 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0a27f84-9aa3-4fc6-8dad-1c016d7852c9 00:28:45.561 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:28:45.820 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:28:45.820 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:28:45.820 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3772482 00:28:45.820 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3772482 00:28:45.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3772482 Killed "${NVMF_APP[@]}" "$@" 00:28:45.820 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:28:45.820 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:28:45.820 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:45.820 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:45.820 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:45.820 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3777405 00:28:45.820 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3777405 00:28:45.820 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:28:45.820 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3777405 ']' 00:28:45.820 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.820 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:45.820 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:45.820 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:45.820 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:45.820 [2024-12-09 05:23:22.403272] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:45.820 [2024-12-09 05:23:22.404179] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:28:45.820 [2024-12-09 05:23:22.404215] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.079 [2024-12-09 05:23:22.473665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.079 [2024-12-09 05:23:22.514424] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:46.079 [2024-12-09 05:23:22.514460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:46.079 [2024-12-09 05:23:22.514468] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:46.079 [2024-12-09 05:23:22.514474] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:46.079 [2024-12-09 05:23:22.514479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:46.079 [2024-12-09 05:23:22.515045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.079 [2024-12-09 05:23:22.583355] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:46.079 [2024-12-09 05:23:22.583579] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:46.079 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:46.079 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:28:46.079 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:46.079 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:46.080 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:46.080 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:46.080 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:46.338 [2024-12-09 05:23:22.817972] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:46.338 [2024-12-09 05:23:22.818087] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:46.338 [2024-12-09 05:23:22.818126] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:46.338 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:28:46.338 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 88ee5f29-fde2-4c5c-bf3c-661b27f837e5 00:28:46.338 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=88ee5f29-fde2-4c5c-bf3c-661b27f837e5 00:28:46.338 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:46.338 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:28:46.338 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:46.338 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:46.338 05:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:46.597 05:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 88ee5f29-fde2-4c5c-bf3c-661b27f837e5 -t 2000 00:28:46.597 [ 00:28:46.597 { 00:28:46.597 "name": "88ee5f29-fde2-4c5c-bf3c-661b27f837e5", 00:28:46.597 "aliases": [ 00:28:46.597 "lvs/lvol" 00:28:46.597 ], 00:28:46.597 "product_name": "Logical Volume", 00:28:46.597 "block_size": 4096, 00:28:46.597 "num_blocks": 38912, 00:28:46.597 "uuid": "88ee5f29-fde2-4c5c-bf3c-661b27f837e5", 00:28:46.597 "assigned_rate_limits": { 00:28:46.597 "rw_ios_per_sec": 0, 00:28:46.597 "rw_mbytes_per_sec": 0, 00:28:46.597 
"r_mbytes_per_sec": 0, 00:28:46.597 "w_mbytes_per_sec": 0 00:28:46.597 }, 00:28:46.597 "claimed": false, 00:28:46.597 "zoned": false, 00:28:46.597 "supported_io_types": { 00:28:46.597 "read": true, 00:28:46.597 "write": true, 00:28:46.597 "unmap": true, 00:28:46.597 "flush": false, 00:28:46.597 "reset": true, 00:28:46.597 "nvme_admin": false, 00:28:46.597 "nvme_io": false, 00:28:46.597 "nvme_io_md": false, 00:28:46.597 "write_zeroes": true, 00:28:46.597 "zcopy": false, 00:28:46.597 "get_zone_info": false, 00:28:46.597 "zone_management": false, 00:28:46.597 "zone_append": false, 00:28:46.597 "compare": false, 00:28:46.597 "compare_and_write": false, 00:28:46.597 "abort": false, 00:28:46.597 "seek_hole": true, 00:28:46.597 "seek_data": true, 00:28:46.597 "copy": false, 00:28:46.597 "nvme_iov_md": false 00:28:46.597 }, 00:28:46.597 "driver_specific": { 00:28:46.597 "lvol": { 00:28:46.597 "lvol_store_uuid": "b0a27f84-9aa3-4fc6-8dad-1c016d7852c9", 00:28:46.597 "base_bdev": "aio_bdev", 00:28:46.597 "thin_provision": false, 00:28:46.597 "num_allocated_clusters": 38, 00:28:46.597 "snapshot": false, 00:28:46.597 "clone": false, 00:28:46.597 "esnap_clone": false 00:28:46.597 } 00:28:46.597 } 00:28:46.597 } 00:28:46.597 ] 00:28:46.597 05:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:28:46.597 05:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0a27f84-9aa3-4fc6-8dad-1c016d7852c9 00:28:46.597 05:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:28:46.856 05:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:28:46.856 05:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0a27f84-9aa3-4fc6-8dad-1c016d7852c9 00:28:46.856 05:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:28:47.116 05:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:28:47.116 05:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:47.376 [2024-12-09 05:23:23.775411] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:47.376 05:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0a27f84-9aa3-4fc6-8dad-1c016d7852c9 00:28:47.376 05:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:28:47.376 05:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0a27f84-9aa3-4fc6-8dad-1c016d7852c9 00:28:47.376 05:23:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:47.376 05:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:47.376 05:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:47.376 05:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:47.376 05:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:47.376 05:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:47.376 05:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:47.376 05:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:28:47.376 05:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0a27f84-9aa3-4fc6-8dad-1c016d7852c9 00:28:47.376 request: 00:28:47.376 { 00:28:47.376 "uuid": "b0a27f84-9aa3-4fc6-8dad-1c016d7852c9", 00:28:47.376 "method": "bdev_lvol_get_lvstores", 00:28:47.376 "req_id": 1 00:28:47.376 } 00:28:47.376 Got JSON-RPC error response 00:28:47.376 response: 00:28:47.376 { 00:28:47.376 "code": -19, 00:28:47.376 "message": "No such device" 00:28:47.376 } 00:28:47.376 05:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:28:47.376 05:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:47.376 05:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:47.376 05:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:47.376 05:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:47.635 aio_bdev 00:28:47.635 05:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 88ee5f29-fde2-4c5c-bf3c-661b27f837e5 00:28:47.635 05:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=88ee5f29-fde2-4c5c-bf3c-661b27f837e5 00:28:47.635 05:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:47.635 05:23:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:28:47.635 05:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:47.635 05:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:47.635 05:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:47.894 05:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 88ee5f29-fde2-4c5c-bf3c-661b27f837e5 -t 2000 00:28:48.153 [ 00:28:48.153 { 00:28:48.153 "name": "88ee5f29-fde2-4c5c-bf3c-661b27f837e5", 00:28:48.153 "aliases": [ 00:28:48.153 "lvs/lvol" 00:28:48.153 ], 00:28:48.153 "product_name": "Logical Volume", 00:28:48.153 "block_size": 4096, 00:28:48.153 "num_blocks": 38912, 00:28:48.153 "uuid": "88ee5f29-fde2-4c5c-bf3c-661b27f837e5", 00:28:48.153 "assigned_rate_limits": { 00:28:48.153 "rw_ios_per_sec": 0, 00:28:48.153 "rw_mbytes_per_sec": 0, 00:28:48.153 "r_mbytes_per_sec": 0, 00:28:48.153 "w_mbytes_per_sec": 0 00:28:48.153 }, 00:28:48.153 "claimed": false, 00:28:48.153 "zoned": false, 00:28:48.153 "supported_io_types": { 00:28:48.153 "read": true, 00:28:48.153 "write": true, 00:28:48.153 "unmap": true, 00:28:48.153 "flush": false, 00:28:48.153 "reset": true, 00:28:48.153 "nvme_admin": false, 00:28:48.153 "nvme_io": false, 00:28:48.153 "nvme_io_md": false, 00:28:48.153 "write_zeroes": true, 00:28:48.153 "zcopy": false, 00:28:48.153 "get_zone_info": false, 00:28:48.153 "zone_management": false, 00:28:48.153 "zone_append": false, 00:28:48.153 "compare": false, 00:28:48.153 "compare_and_write": false, 00:28:48.153 "abort": false, 00:28:48.153 "seek_hole": true, 00:28:48.153 "seek_data": true, 00:28:48.153 "copy": false, 00:28:48.153 "nvme_iov_md": false 00:28:48.153 }, 00:28:48.153 "driver_specific": { 00:28:48.153 "lvol": { 00:28:48.153 "lvol_store_uuid": "b0a27f84-9aa3-4fc6-8dad-1c016d7852c9", 00:28:48.153 "base_bdev": "aio_bdev", 00:28:48.153 "thin_provision": false, 00:28:48.153 "num_allocated_clusters": 38, 00:28:48.153 "snapshot": false, 00:28:48.153 "clone": false, 00:28:48.153 "esnap_clone": false 00:28:48.153 } 00:28:48.153 } 00:28:48.153 } 00:28:48.153 ] 00:28:48.153 05:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:28:48.153 05:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0a27f84-9aa3-4fc6-8dad-1c016d7852c9 00:28:48.153 05:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:48.412 05:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:48.412 05:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0a27f84-9aa3-4fc6-8dad-1c016d7852c9 00:28:48.412 05:23:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:48.412 05:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:48.412 05:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 88ee5f29-fde2-4c5c-bf3c-661b27f837e5 00:28:48.681 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b0a27f84-9aa3-4fc6-8dad-1c016d7852c9 00:28:48.941 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:49.199 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:49.199 00:28:49.199 real 0m17.089s 00:28:49.199 user 0m34.489s 00:28:49.199 sys 0m3.797s 00:28:49.199 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:49.199 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:49.199 ************************************ 00:28:49.199 END TEST lvs_grow_dirty 00:28:49.199 ************************************ 00:28:49.199 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:28:49.199 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:28:49.199 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:28:49.199 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:28:49.199 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:28:49.199 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:28:49.199 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:28:49.199 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:28:49.199 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:28:49.200 nvmf_trace.0 00:28:49.200 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:28:49.200 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:28:49.200 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:49.200 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
00:28:49.200 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:49.200 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:28:49.200 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:49.200 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:49.200 rmmod nvme_tcp 00:28:49.200 rmmod nvme_fabrics 00:28:49.200 rmmod nvme_keyring 00:28:49.200 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:49.200 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:28:49.200 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:28:49.200 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3777405 ']' 00:28:49.200 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3777405 00:28:49.200 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3777405 ']' 00:28:49.200 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3777405 00:28:49.200 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:28:49.200 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:49.200 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3777405 00:28:49.459 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:49.459 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:49.459 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3777405' 00:28:49.459 killing process with pid 3777405 00:28:49.459 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3777405 00:28:49.459 05:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3777405 00:28:49.459 05:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:49.459 05:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:49.459 05:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:49.459 05:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:28:49.459 05:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:28:49.459 05:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:49.459 05:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:28:49.459 05:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:49.459 05:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:49.459 05:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.459 05:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.459 05:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:51.998 00:28:51.998 real 0m41.682s 00:28:51.998 user 0m52.253s 00:28:51.998 sys 0m9.865s 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:51.998 ************************************ 00:28:51.998 END TEST nvmf_lvs_grow 00:28:51.998 ************************************ 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:51.998 ************************************ 00:28:51.998 START TEST nvmf_bdev_io_wait 00:28:51.998 ************************************ 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:28:51.998 * Looking for test storage... 
00:28:51.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:51.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.998 --rc genhtml_branch_coverage=1 00:28:51.998 --rc genhtml_function_coverage=1 00:28:51.998 --rc genhtml_legend=1 00:28:51.998 --rc geninfo_all_blocks=1 00:28:51.998 --rc geninfo_unexecuted_blocks=1 00:28:51.998 00:28:51.998 ' 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:51.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.998 --rc genhtml_branch_coverage=1 00:28:51.998 --rc genhtml_function_coverage=1 00:28:51.998 --rc genhtml_legend=1 00:28:51.998 --rc geninfo_all_blocks=1 00:28:51.998 --rc geninfo_unexecuted_blocks=1 00:28:51.998 00:28:51.998 ' 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:51.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.998 --rc genhtml_branch_coverage=1 00:28:51.998 --rc genhtml_function_coverage=1 00:28:51.998 --rc genhtml_legend=1 00:28:51.998 --rc geninfo_all_blocks=1 00:28:51.998 --rc geninfo_unexecuted_blocks=1 00:28:51.998 00:28:51.998 ' 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:51.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.998 --rc genhtml_branch_coverage=1 00:28:51.998 --rc genhtml_function_coverage=1 00:28:51.998 --rc genhtml_legend=1 00:28:51.998 --rc geninfo_all_blocks=1 00:28:51.998 --rc 
geninfo_unexecuted_blocks=1 00:28:51.998 00:28:51.998 ' 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:51.998 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:28:51.999 05:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:57.270 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:57.270 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:57.270 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:57.271 Found net devices under 0000:86:00.0: cvl_0_0 00:28:57.271 
05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:57.271 Found net devices under 0000:86:00.1: cvl_0_1 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:57.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:57.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:28:57.271 00:28:57.271 --- 10.0.0.2 ping statistics --- 00:28:57.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:57.271 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:57.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:57.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:28:57.271 00:28:57.271 --- 10.0.0.1 ping statistics --- 00:28:57.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:57.271 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3781446 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3781446 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3781446 ']' 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:28:57.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
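For reference, the network plumbing that nvmftestinit traced above amounts to the following sequence, a condensed restatement of the commands visible in the log (slightly simplified; the traced iptables rule also carries an SPDK_NVMF comment tag). One E810 port (cvl_0_0) is moved into its own namespace to act as the target side, while the second port (cvl_0_1) stays in the root namespace as the initiator side:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address, inside namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT           # allow NVMe/TCP on the default port
    ping -c 1 10.0.0.2                                                     # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                       # target -> initiator reachability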
00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:57.271 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:57.271 [2024-12-09 05:23:33.783888] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:57.271 [2024-12-09 05:23:33.784834] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:28:57.271 [2024-12-09 05:23:33.784867] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:57.271 [2024-12-09 05:23:33.853938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:57.271 [2024-12-09 05:23:33.897791] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:57.271 [2024-12-09 05:23:33.897827] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:57.271 [2024-12-09 05:23:33.897835] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:57.271 [2024-12-09 05:23:33.897841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:57.271 [2024-12-09 05:23:33.897846] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:57.271 [2024-12-09 05:23:33.899343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.271 [2024-12-09 05:23:33.899440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:57.271 [2024-12-09 05:23:33.899650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:57.271 [2024-12-09 05:23:33.899654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:57.271 [2024-12-09 05:23:33.899949] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
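The target application itself is launched inside that namespace. Condensing the invocation traced above (repository path shortened to ./ for brevity):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc
    # -m 0xF         : four reactors (cores 0-3), all started in interrupt mode per the notices above
    # --wait-for-rpc : hold off framework initialization so bdev_set_options can be applied first;
    #                  the rpc_cmd framework_start_init call that follows completes the startup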
00:28:57.531 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:57.531 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:28:57.531 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:57.531 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:57.531 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:57.531 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:57.531 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:28:57.531 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.531 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:57.531 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.531 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:28:57.531 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.531 05:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:57.531 [2024-12-09 05:23:34.035077] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:57.531 [2024-12-09 05:23:34.035205] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:57.531 [2024-12-09 05:23:34.035758] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:57.531 [2024-12-09 05:23:34.036218] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:57.531 [2024-12-09 05:23:34.044090] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:57.531 Malloc0 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:57.531 [2024-12-09 05:23:34.096306] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3781604 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3781607 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:57.531 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:57.531 { 00:28:57.531 "params": { 00:28:57.531 "name": "Nvme$subsystem", 00:28:57.531 "trtype": "$TEST_TRANSPORT", 00:28:57.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.531 "adrfam": "ipv4", 00:28:57.531 "trsvcid": "$NVMF_PORT", 00:28:57.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.532 "hdgst": ${hdgst:-false}, 00:28:57.532 "ddgst": ${ddgst:-false} 00:28:57.532 }, 00:28:57.532 "method": "bdev_nvme_attach_controller" 00:28:57.532 } 00:28:57.532 EOF 00:28:57.532 )") 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3781610 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:57.532 { 00:28:57.532 "params": { 00:28:57.532 "name": "Nvme$subsystem", 00:28:57.532 "trtype": "$TEST_TRANSPORT", 00:28:57.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.532 "adrfam": "ipv4", 00:28:57.532 "trsvcid": "$NVMF_PORT", 00:28:57.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.532 "hdgst": ${hdgst:-false}, 00:28:57.532 "ddgst": ${ddgst:-false} 00:28:57.532 }, 00:28:57.532 "method": "bdev_nvme_attach_controller" 00:28:57.532 } 00:28:57.532 EOF 00:28:57.532 )") 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=3781614 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:57.532 { 00:28:57.532 "params": { 00:28:57.532 "name": "Nvme$subsystem", 00:28:57.532 "trtype": "$TEST_TRANSPORT", 00:28:57.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.532 "adrfam": "ipv4", 00:28:57.532 "trsvcid": "$NVMF_PORT", 00:28:57.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.532 "hdgst": ${hdgst:-false}, 00:28:57.532 "ddgst": ${ddgst:-false} 00:28:57.532 }, 00:28:57.532 "method": "bdev_nvme_attach_controller" 00:28:57.532 } 00:28:57.532 EOF 00:28:57.532 )") 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:57.532 { 00:28:57.532 "params": { 00:28:57.532 "name": "Nvme$subsystem", 00:28:57.532 "trtype": "$TEST_TRANSPORT", 00:28:57.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.532 "adrfam": "ipv4", 00:28:57.532 "trsvcid": "$NVMF_PORT", 00:28:57.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.532 "hdgst": ${hdgst:-false}, 00:28:57.532 "ddgst": ${ddgst:-false} 00:28:57.532 }, 00:28:57.532 "method": "bdev_nvme_attach_controller" 00:28:57.532 } 00:28:57.532 EOF 00:28:57.532 )") 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3781604 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
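Each bdevperf instance reads its NVMe-oF attach parameters from /dev/fd/63: gen_nvmf_target_json expands the heredoc template shown above and filters it through jq, and the resolved JSON is what the printf calls below emit. A hypothetical stand-alone equivalent of one traced invocation (path shortened; process substitution is assumed to be how /dev/fd/63 is supplied):

    ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(gen_nvmf_target_json)    # connection parameters arrive on /dev/fd/63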
00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:57.532 "params": { 00:28:57.532 "name": "Nvme1", 00:28:57.532 "trtype": "tcp", 00:28:57.532 "traddr": "10.0.0.2", 00:28:57.532 "adrfam": "ipv4", 00:28:57.532 "trsvcid": "4420", 00:28:57.532 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:57.532 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:57.532 "hdgst": false, 00:28:57.532 "ddgst": false 00:28:57.532 }, 00:28:57.532 "method": "bdev_nvme_attach_controller" 00:28:57.532 }' 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:57.532 "params": { 00:28:57.532 "name": "Nvme1", 00:28:57.532 "trtype": "tcp", 00:28:57.532 "traddr": "10.0.0.2", 00:28:57.532 "adrfam": "ipv4", 00:28:57.532 "trsvcid": "4420", 00:28:57.532 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:57.532 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:57.532 "hdgst": false, 00:28:57.532 "ddgst": false 00:28:57.532 }, 00:28:57.532 "method": "bdev_nvme_attach_controller" 00:28:57.532 }' 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:57.532 "params": { 00:28:57.532 "name": "Nvme1", 00:28:57.532 "trtype": "tcp", 00:28:57.532 "traddr": "10.0.0.2", 00:28:57.532 "adrfam": "ipv4", 00:28:57.532 "trsvcid": "4420", 00:28:57.532 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:57.532 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:57.532 "hdgst": false, 00:28:57.532 "ddgst": false 00:28:57.532 }, 00:28:57.532 "method": "bdev_nvme_attach_controller" 00:28:57.532 }' 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:57.532 05:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:57.532 "params": { 00:28:57.532 "name": "Nvme1", 00:28:57.532 "trtype": "tcp", 00:28:57.532 "traddr": "10.0.0.2", 00:28:57.532 "adrfam": "ipv4", 00:28:57.532 "trsvcid": "4420", 00:28:57.532 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:57.532 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:57.532 "hdgst": false, 00:28:57.532 "ddgst": false 00:28:57.532 }, 00:28:57.532 "method": "bdev_nvme_attach_controller" 00:28:57.532 }' 00:28:57.532 [2024-12-09 05:23:34.147660] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:28:57.532 [2024-12-09 05:23:34.147713] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:57.532 [2024-12-09 05:23:34.148931] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:28:57.532 [2024-12-09 05:23:34.148981] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:28:57.532 [2024-12-09 05:23:34.149059] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:28:57.532 [2024-12-09 05:23:34.149104] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:28:57.532 [2024-12-09 05:23:34.149197] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:28:57.532 [2024-12-09 05:23:34.149236] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:28:57.791 [2024-12-09 05:23:34.341347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.791 [2024-12-09 05:23:34.384445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:57.791 [2024-12-09 05:23:34.425509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.050 [2024-12-09 05:23:34.468599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:58.050 [2024-12-09 05:23:34.530122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.050 [2024-12-09 05:23:34.573294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:58.050 [2024-12-09 05:23:34.626645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.050 [2024-12-09 05:23:34.680602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:58.309 Running I/O for 1 seconds... 00:28:58.309 Running I/O for 1 seconds... 00:28:58.309 Running I/O for 1 seconds... 00:28:58.309 Running I/O for 1 seconds... 
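At this point four initiator processes are running in parallel for one second each, all attaching to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420. A condensed sketch of what the script does; the real invocations, traced above, pin each workload to its own core mask (0x10/0x20/0x40/0x80) and instance id, and wait on WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID individually:

    for w in write read flush unmap; do
        ./build/examples/bdevperf --json <(gen_nvmf_target_json) \
            -q 128 -o 4096 -w "$w" -t 1 -s 256 &   # queue depth 128, 4 KiB I/O, 1 s run, 256 MB app memory
    done
    wait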
00:28:59.688 234872.00 IOPS, 917.47 MiB/s
00:28:59.688 Latency(us)
00:28:59.688 [2024-12-09T04:23:36.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:59.688 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:28:59.688 Nvme1n1 : 1.00 234509.94 916.05 0.00 0.00 543.25 236.86 1545.79
00:28:59.688 [2024-12-09T04:23:36.334Z] ===================================================================================================================
00:28:59.688 [2024-12-09T04:23:36.334Z] Total : 234509.94 916.05 0.00 0.00 543.25 236.86 1545.79
00:28:59.688 11399.00 IOPS, 44.53 MiB/s
00:28:59.688 Latency(us)
00:28:59.688 [2024-12-09T04:23:36.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:59.688 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:28:59.688 Nvme1n1 : 1.01 11462.80 44.78 0.00 0.00 11129.71 1681.14 12936.24
00:28:59.688 [2024-12-09T04:23:36.334Z] ===================================================================================================================
00:28:59.688 [2024-12-09T04:23:36.334Z] Total : 11462.80 44.78 0.00 0.00 11129.71 1681.14 12936.24
00:28:59.688 10237.00 IOPS, 39.99 MiB/s
00:28:59.688 Latency(us)
00:28:59.688 [2024-12-09T04:23:36.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:59.688 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:28:59.688 Nvme1n1 : 1.01 10292.13 40.20 0.00 0.00 12389.66 4445.05 15044.79
00:28:59.688 [2024-12-09T04:23:36.334Z] ===================================================================================================================
00:28:59.688 [2024-12-09T04:23:36.334Z] Total : 10292.13 40.20 0.00 0.00 12389.66 4445.05 15044.79
00:28:59.688 11193.00 IOPS, 43.72 MiB/s
00:28:59.688 Latency(us)
00:28:59.688 [2024-12-09T04:23:36.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:59.688 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:28:59.688 Nvme1n1 : 1.00 11287.44 44.09 0.00 0.00 11315.27 2364.99 16754.42
00:28:59.688 [2024-12-09T04:23:36.334Z] ===================================================================================================================
00:28:59.688 [2024-12-09T04:23:36.334Z] Total : 11287.44 44.09 0.00 0.00 11315.27 2364.99 16754.42
00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3781607
00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3781610
00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3781614
00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:59.688 rmmod nvme_tcp 00:28:59.688 rmmod nvme_fabrics 00:28:59.688 rmmod nvme_keyring 00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3781446 ']' 00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3781446 00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3781446 ']' 00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3781446 00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3781446 00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3781446' 00:28:59.688 killing process with pid 3781446 00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3781446 00:28:59.688 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3781446 00:28:59.948 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:59.948 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:59.948 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:59.948 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:28:59.948 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:28:59.948 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:59.948 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:28:59.948 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:59.948 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:59.948 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.948 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.948 05:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.856 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:01.856 00:29:01.856 real 0m10.282s 00:29:01.856 user 0m15.695s 00:29:01.856 sys 0m6.182s 00:29:01.856 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:01.856 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:01.856 ************************************ 00:29:01.856 END TEST nvmf_bdev_io_wait 00:29:01.856 ************************************ 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:02.116 ************************************ 00:29:02.116 START TEST nvmf_queue_depth 00:29:02.116 ************************************ 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:02.116 * Looking for test storage... 
00:29:02.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:02.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.116 --rc genhtml_branch_coverage=1 00:29:02.116 --rc genhtml_function_coverage=1 00:29:02.116 --rc genhtml_legend=1 00:29:02.116 --rc geninfo_all_blocks=1 00:29:02.116 --rc geninfo_unexecuted_blocks=1 00:29:02.116 00:29:02.116 ' 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:02.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.116 --rc genhtml_branch_coverage=1 00:29:02.116 --rc genhtml_function_coverage=1 00:29:02.116 --rc genhtml_legend=1 00:29:02.116 --rc geninfo_all_blocks=1 00:29:02.116 --rc geninfo_unexecuted_blocks=1 00:29:02.116 00:29:02.116 ' 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:02.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.116 --rc genhtml_branch_coverage=1 00:29:02.116 --rc genhtml_function_coverage=1 00:29:02.116 --rc genhtml_legend=1 00:29:02.116 --rc geninfo_all_blocks=1 00:29:02.116 --rc geninfo_unexecuted_blocks=1 00:29:02.116 00:29:02.116 ' 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:02.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.116 --rc genhtml_branch_coverage=1 00:29:02.116 --rc genhtml_function_coverage=1 00:29:02.116 --rc genhtml_legend=1 00:29:02.116 --rc geninfo_all_blocks=1 00:29:02.116 --rc 
geninfo_unexecuted_blocks=1 00:29:02.116 00:29:02.116 ' 00:29:02.116 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:02.117 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:02.376 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:29:02.376 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:29:02.376 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:02.376 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:29:02.376 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:02.376 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:02.376 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:02.376 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:02.376 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:02.376 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.376 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:02.376 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.376 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:02.376 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:02.376 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:29:02.376 05:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:07.643 05:23:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.643 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:07.643 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:07.644 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:29:07.644 Found net devices under 0000:86:00.0: cvl_0_0 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:07.644 Found net devices under 0000:86:00.1: cvl_0_1 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:07.644 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:07.903 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:07.903 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:07.903 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:07.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:07.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:29:07.903 00:29:07.903 --- 10.0.0.2 ping statistics --- 00:29:07.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.903 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:29:07.903 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:07.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:07.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:29:07.903 00:29:07.903 --- 10.0.0.1 ping statistics --- 00:29:07.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.903 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:29:07.903 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:07.903 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:29:07.903 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:07.903 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:07.903 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:07.903 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:07.903 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:07.903 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:07.904 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:07.904 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:29:07.904 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:07.904 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:07.904 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:07.904 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3785467 00:29:07.904 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3785467 00:29:07.904 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:29:07.904 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3785467 ']' 00:29:07.904 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.904 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:07.904 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
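Before nvmf_tgt was launched above, nvmf_tcp_init split the two E810 ports into a target-side network namespace and an initiator-side interface. Condensed from the trace, and keeping the interface names and 10.0.0.0/24 addresses reported in this particular run (they are not fixed values), that plumbing amounts to roughly:

    ip netns add cvl_0_0_ns_spdk                        # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open port 4420 for NVMe/TCP; the SPDK_NVMF comment tag is what lets the
    # later iptables-save | grep -v SPDK_NVMF | iptables-restore teardown strip it again
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

The two pings above simply confirm that the target address (10.0.0.2) and the initiator address (10.0.0.1) can reach each other before the target application is started.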
00:29:07.904 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:07.904 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:07.904 [2024-12-09 05:23:44.389975] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:07.904 [2024-12-09 05:23:44.390907] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:29:07.904 [2024-12-09 05:23:44.390940] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.904 [2024-12-09 05:23:44.463014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.904 [2024-12-09 05:23:44.503892] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.904 [2024-12-09 05:23:44.503926] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:07.904 [2024-12-09 05:23:44.503933] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:07.904 [2024-12-09 05:23:44.503939] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:07.904 [2024-12-09 05:23:44.503945] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:07.904 [2024-12-09 05:23:44.504489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.162 [2024-12-09 05:23:44.571909] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:08.162 [2024-12-09 05:23:44.572156] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
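With nvmf_tgt now running in interrupt mode inside cvl_0_0_ns_spdk, queue_depth.sh provisions the target over RPC (the rpc_cmd calls on script lines 23-27, traced below). Issued by hand, the same configuration would look roughly like the sketch that follows; the rpc.py path and default socket are assumptions, and the transport options -o and -u 8192 are copied verbatim from the trace rather than interpreted:

    rpc=./scripts/rpc.py                                 # assumes the default /var/tmp/spdk.sock socket
    $rpc nvmf_create_transport -t tcp -o -u 8192         # TCP transport, options as used by the test
    $rpc bdev_malloc_create 64 512 -b Malloc0            # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420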
00:29:08.162 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:08.162 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:29:08.162 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:08.162 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:08.162 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:08.162 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.162 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:08.162 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.162 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:08.162 [2024-12-09 05:23:44.633023] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:08.162 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.162 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:08.162 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.162 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:08.162 Malloc0 00:29:08.162 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.162 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:08.162 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.162 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:08.162 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.162 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:08.162 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.162 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:08.163 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.163 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:08.163 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:08.163 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:08.163 [2024-12-09 05:23:44.689036] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:08.163 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.163 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3785490 00:29:08.163 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:29:08.163 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:08.163 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3785490 /var/tmp/bdevperf.sock 00:29:08.163 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3785490 ']' 00:29:08.163 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:08.163 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:08.163 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:08.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:08.163 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:08.163 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:08.163 [2024-12-09 05:23:44.737867] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
00:29:08.163 [2024-12-09 05:23:44.737908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3785490 ] 00:29:08.163 [2024-12-09 05:23:44.802149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.421 [2024-12-09 05:23:44.845312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.421 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:08.422 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:29:08.422 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:08.422 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.422 05:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:08.682 NVMe0n1 00:29:08.682 05:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.682 05:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:08.682 Running I/O for 10 seconds... 00:29:10.997 11264.00 IOPS, 44.00 MiB/s [2024-12-09T04:23:48.580Z] 11329.50 IOPS, 44.26 MiB/s [2024-12-09T04:23:49.518Z] 11607.67 IOPS, 45.34 MiB/s [2024-12-09T04:23:50.454Z] 11593.75 IOPS, 45.29 MiB/s [2024-12-09T04:23:51.391Z] 11636.80 IOPS, 45.46 MiB/s [2024-12-09T04:23:52.328Z] 11662.00 IOPS, 45.55 MiB/s [2024-12-09T04:23:53.437Z] 11708.29 IOPS, 45.74 MiB/s [2024-12-09T04:23:54.374Z] 11774.25 IOPS, 45.99 MiB/s [2024-12-09T04:23:55.311Z] 11769.67 IOPS, 45.98 MiB/s [2024-12-09T04:23:55.311Z] 11790.80 IOPS, 46.06 MiB/s 00:29:18.665 Latency(us) 00:29:18.665 [2024-12-09T04:23:55.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.665 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:29:18.665 Verification LBA range: start 0x0 length 0x4000 00:29:18.665 NVMe0n1 : 10.05 11830.70 46.21 0.00 0.00 86274.85 7151.97 56531.92 00:29:18.665 [2024-12-09T04:23:55.311Z] =================================================================================================================== 00:29:18.665 [2024-12-09T04:23:55.311Z] Total : 11830.70 46.21 0.00 0.00 86274.85 7151.97 56531.92 00:29:18.665 { 00:29:18.665 "results": [ 00:29:18.665 { 00:29:18.665 "job": "NVMe0n1", 00:29:18.665 "core_mask": "0x1", 00:29:18.665 "workload": "verify", 00:29:18.665 "status": "finished", 00:29:18.665 "verify_range": { 00:29:18.665 "start": 0, 00:29:18.665 "length": 16384 00:29:18.665 }, 00:29:18.665 "queue_depth": 1024, 00:29:18.665 "io_size": 4096, 00:29:18.665 "runtime": 10.046741, 00:29:18.665 "iops": 11830.702115243143, 00:29:18.665 "mibps": 46.213680137668526, 00:29:18.665 "io_failed": 0, 00:29:18.665 "io_timeout": 0, 00:29:18.665 "avg_latency_us": 86274.84751843968, 00:29:18.665 "min_latency_us": 7151.972173913044, 00:29:18.665 "max_latency_us": 56531.92347826087 00:29:18.665 } 
00:29:18.665 ], 00:29:18.665 "core_count": 1 00:29:18.665 } 00:29:18.923 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3785490 00:29:18.923 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3785490 ']' 00:29:18.923 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3785490 00:29:18.923 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:29:18.923 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:18.923 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3785490 00:29:18.923 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:18.923 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:18.923 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3785490' 00:29:18.924 killing process with pid 3785490 00:29:18.924 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3785490 00:29:18.924 Received shutdown signal, test time was about 10.000000 seconds 00:29:18.924 00:29:18.924 Latency(us) 00:29:18.924 [2024-12-09T04:23:55.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.924 [2024-12-09T04:23:55.570Z] =================================================================================================================== 00:29:18.924 [2024-12-09T04:23:55.570Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:18.924 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3785490 00:29:18.924 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:29:18.924 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:29:18.924 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:18.924 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:29:18.924 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:18.924 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:29:18.924 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:18.924 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:19.194 rmmod nvme_tcp 00:29:19.194 rmmod nvme_fabrics 00:29:19.194 rmmod nvme_keyring 00:29:19.194 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:19.194 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:29:19.194 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
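The IOPS and latency figures above come from the bdevperf run that queue_depth.sh assembled earlier in the trace. Reduced to its essentials, and reusing the socket path, NQN and address from this run, the initiator side was roughly:

    # start bdevperf idle (-z) on its own RPC socket: queue depth 1024, 4 KiB I/O, verify workload, 10 s
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

    # attach the exported namespace over NVMe/TCP; it shows up as bdev NVMe0n1
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # run the timed I/O phase
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The -q 1024 queue depth is the point of the test, presumably to exercise command queuing well beyond the transport's default queue size; the all-zero latency table printed next to the shutdown message is bdevperf's teardown output, after the real results (about 11.8k IOPS, about 46 MiB/s) were already reported.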
00:29:19.194 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3785467 ']' 00:29:19.194 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3785467 00:29:19.194 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3785467 ']' 00:29:19.194 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3785467 00:29:19.194 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:29:19.194 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:19.195 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3785467 00:29:19.195 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:19.195 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:19.195 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3785467' 00:29:19.195 killing process with pid 3785467 00:29:19.195 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3785467 00:29:19.195 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3785467 00:29:19.453 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:19.453 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:19.453 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:19.453 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:29:19.453 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:29:19.453 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:19.453 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:29:19.453 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:19.453 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:19.453 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.453 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:19.453 05:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.384 05:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:21.384 00:29:21.384 real 0m19.410s 00:29:21.384 user 0m22.690s 00:29:21.384 sys 0m6.118s 00:29:21.384 05:23:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:21.384 05:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:21.384 ************************************ 00:29:21.384 END TEST nvmf_queue_depth 00:29:21.384 ************************************ 00:29:21.384 05:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:21.384 05:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:21.384 05:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:21.384 05:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:21.384 ************************************ 00:29:21.384 START TEST nvmf_target_multipath 00:29:21.384 ************************************ 00:29:21.384 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:21.643 * Looking for test storage... 00:29:21.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:21.643 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:21.643 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:29:21.643 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:21.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.644 --rc genhtml_branch_coverage=1 00:29:21.644 --rc genhtml_function_coverage=1 00:29:21.644 --rc genhtml_legend=1 00:29:21.644 --rc geninfo_all_blocks=1 00:29:21.644 --rc geninfo_unexecuted_blocks=1 00:29:21.644 00:29:21.644 ' 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:21.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.644 --rc genhtml_branch_coverage=1 00:29:21.644 --rc genhtml_function_coverage=1 00:29:21.644 --rc genhtml_legend=1 00:29:21.644 --rc geninfo_all_blocks=1 00:29:21.644 --rc geninfo_unexecuted_blocks=1 00:29:21.644 00:29:21.644 ' 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:21.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.644 --rc genhtml_branch_coverage=1 00:29:21.644 --rc genhtml_function_coverage=1 00:29:21.644 --rc genhtml_legend=1 
00:29:21.644 --rc geninfo_all_blocks=1 00:29:21.644 --rc geninfo_unexecuted_blocks=1 00:29:21.644 00:29:21.644 ' 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:21.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.644 --rc genhtml_branch_coverage=1 00:29:21.644 --rc genhtml_function_coverage=1 00:29:21.644 --rc genhtml_legend=1 00:29:21.644 --rc geninfo_all_blocks=1 00:29:21.644 --rc geninfo_unexecuted_blocks=1 00:29:21.644 00:29:21.644 ' 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:21.644 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:21.645 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:21.645 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:21.645 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:21.645 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:21.645 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:21.645 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:21.645 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:21.645 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:21.645 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:21.645 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:29:21.645 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:21.645 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:21.645 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:21.645 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:21.645 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:21.645 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.645 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.645 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.645 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:21.645 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:21.645 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:29:21.645 05:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:26.909 05:24:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:26.909 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:26.909 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:26.909 05:24:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:26.909 Found net devices under 0000:86:00.0: cvl_0_0 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:26.909 Found net devices under 0000:86:00.1: cvl_0_1 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
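(The device-discovery phase traced above reduces to a sysfs lookup; this is a condensed sketch using the PCI addresses reported in this run, 0000:86:00.0 and 0000:86:00.1, Intel E810 / device id 0x159b. The cvl_0_* names are simply whatever the kernel lists under each device's net/ directory.)

  for pci in 0000:86:00.0 0000:86:00.1; do
      # resolve the PCI function to its kernel net device (prints cvl_0_0, cvl_0_1 here)
      ls "/sys/bus/pci/devices/$pci/net/"
  done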
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:26.909 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:27.168 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:27.168 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:27.168 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:27.168 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:27.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:27.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:29:27.168 00:29:27.168 --- 10.0.0.2 ping statistics --- 00:29:27.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.168 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:29:27.168 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:27.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:27.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:29:27.168 00:29:27.168 --- 10.0.0.1 ping statistics --- 00:29:27.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.168 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:29:27.168 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:27.168 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:29:27.168 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:27.168 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:27.168 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:27.168 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:27.168 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:27.168 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:27.168 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:27.168 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:29:27.168 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:29:27.168 only one NIC for nvmf test 00:29:27.168 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:29:27.168 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:27.168 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:29:27.168 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:27.168 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:29:27.168 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:27.168 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:27.168 rmmod nvme_tcp 00:29:27.168 rmmod nvme_fabrics 00:29:27.168 rmmod nvme_keyring 00:29:27.168 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:27.168 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:29:27.168 05:24:03 
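(The nvmf_tcp_init sequence traced above boils down to the following commands; this is a condensed sketch of what the harness ran in this log, with the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses and port 4420 taken from the output above. The actual iptables rule also carries an SPDK_NVMF comment tag, omitted here.)

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address on the host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                   # host -> namespace reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> host reachability
  modprobe nvme-tcp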
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:29:27.168 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:29:27.169 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:27.169 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:27.169 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:27.169 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:29:27.169 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:29:27.169 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:27.169 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:29:27.169 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:27.169 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:27.169 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.169 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:27.169 05:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:29:29.698 05:24:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:29.698 00:29:29.698 real 0m7.806s 00:29:29.698 user 0m1.737s 00:29:29.698 sys 0m4.101s 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:29.698 ************************************ 00:29:29.698 END TEST nvmf_target_multipath 00:29:29.698 ************************************ 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:29.698 ************************************ 00:29:29.698 START TEST nvmf_zcopy 00:29:29.698 ************************************ 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:29:29.698 * Looking for test storage... 
00:29:29.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:29:29.698 05:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:29.698 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:29.698 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:29.698 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:29.698 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:29.698 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:29:29.698 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:29:29.698 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:29:29.698 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:29:29.698 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:29:29.698 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:29:29.698 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:29:29.698 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:29.698 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:29:29.698 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:29:29.698 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:29.698 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:29.698 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:29:29.698 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:29:29.698 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.698 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:29:29.698 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:29:29.698 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:29:29.698 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:29:29.698 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:29.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.699 --rc genhtml_branch_coverage=1 00:29:29.699 --rc genhtml_function_coverage=1 00:29:29.699 --rc genhtml_legend=1 00:29:29.699 --rc geninfo_all_blocks=1 00:29:29.699 --rc geninfo_unexecuted_blocks=1 00:29:29.699 00:29:29.699 ' 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:29.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.699 --rc genhtml_branch_coverage=1 00:29:29.699 --rc genhtml_function_coverage=1 00:29:29.699 --rc genhtml_legend=1 00:29:29.699 --rc geninfo_all_blocks=1 00:29:29.699 --rc geninfo_unexecuted_blocks=1 00:29:29.699 00:29:29.699 ' 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:29.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.699 --rc genhtml_branch_coverage=1 00:29:29.699 --rc genhtml_function_coverage=1 00:29:29.699 --rc genhtml_legend=1 00:29:29.699 --rc geninfo_all_blocks=1 00:29:29.699 --rc geninfo_unexecuted_blocks=1 00:29:29.699 00:29:29.699 ' 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:29.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.699 --rc genhtml_branch_coverage=1 00:29:29.699 --rc genhtml_function_coverage=1 00:29:29.699 --rc genhtml_legend=1 00:29:29.699 --rc geninfo_all_blocks=1 00:29:29.699 --rc geninfo_unexecuted_blocks=1 00:29:29.699 00:29:29.699 ' 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.699 05:24:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:29:29.699 05:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:29:34.967 05:24:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:34.967 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:34.967 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:34.967 Found net devices under 0000:86:00.0: cvl_0_0 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:34.967 Found net devices under 0000:86:00.1: cvl_0_1 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:34.967 05:24:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:34.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:34.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:29:34.967 00:29:34.967 --- 10.0.0.2 ping statistics --- 00:29:34.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.967 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:34.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:34.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:29:34.967 00:29:34.967 --- 10.0.0.1 ping statistics --- 00:29:34.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.967 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3794545 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3794545 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
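The nvmf_tcp_init sequence traced above isolates the target-side port in its own network namespace and checks connectivity in both directions before any NVMe/TCP traffic is started. Condensed into a plain shell sketch using the interface names and addresses visible in the log (the comment tag the harness appends to the iptables rule is dropped here):

# target port goes into a private namespace, initiator port stays in the default one
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# verify both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With both pings answering (0.177 ms and 0.086 ms above), 10.0.0.2 is reachable from the default namespace and 10.0.0.1 from inside cvl_0_0_ns_spdk, so the target can listen on 10.0.0.2:4420.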
common/autotest_common.sh@835 -- # '[' -z 3794545 ']' 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:34.967 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:34.967 [2024-12-09 05:24:11.528686] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:34.967 [2024-12-09 05:24:11.529588] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:29:34.967 [2024-12-09 05:24:11.529623] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:34.967 [2024-12-09 05:24:11.598988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.225 [2024-12-09 05:24:11.640646] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:35.225 [2024-12-09 05:24:11.640680] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:35.225 [2024-12-09 05:24:11.640688] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:35.225 [2024-12-09 05:24:11.640694] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:35.225 [2024-12-09 05:24:11.640699] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:35.225 [2024-12-09 05:24:11.641253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.225 [2024-12-09 05:24:11.708889] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:35.225 [2024-12-09 05:24:11.709113] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
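The target application is then launched inside that namespace with interrupt mode enabled and a single-core mask, and the harness blocks in waitforlisten until the RPC socket answers. A minimal sketch of the same launch, assuming it is run from the SPDK repository root shown in the log; the polling loop is an illustrative stand-in for waitforlisten, not its actual implementation:

# start the NVMe-oF target in the target namespace: shm id 0, tracepoint mask 0xFFFF,
# interrupt mode, reactor pinned to core 1 (mask 0x2)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
# poll the default RPC socket until the app responds before configuring it
until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done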
00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:35.225 [2024-12-09 05:24:11.773936] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:35.225 [2024-12-09 05:24:11.790140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:29:35.225 05:24:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:35.225 malloc0 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:35.225 { 00:29:35.225 "params": { 00:29:35.225 "name": "Nvme$subsystem", 00:29:35.225 "trtype": "$TEST_TRANSPORT", 00:29:35.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.225 "adrfam": "ipv4", 00:29:35.225 "trsvcid": "$NVMF_PORT", 00:29:35.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.225 "hdgst": ${hdgst:-false}, 00:29:35.225 "ddgst": ${ddgst:-false} 00:29:35.225 }, 00:29:35.225 "method": "bdev_nvme_attach_controller" 00:29:35.225 } 00:29:35.225 EOF 00:29:35.225 )") 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:29:35.225 05:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:35.225 "params": { 00:29:35.225 "name": "Nvme1", 00:29:35.226 "trtype": "tcp", 00:29:35.226 "traddr": "10.0.0.2", 00:29:35.226 "adrfam": "ipv4", 00:29:35.226 "trsvcid": "4420", 00:29:35.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:35.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:35.226 "hdgst": false, 00:29:35.226 "ddgst": false 00:29:35.226 }, 00:29:35.226 "method": "bdev_nvme_attach_controller" 00:29:35.226 }' 00:29:35.482 [2024-12-09 05:24:11.871174] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
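The rpc_cmd calls traced above configure the target end to end: a TCP transport created with zero-copy enabled, a subsystem that allows any host and has room for ten namespaces, data and discovery listeners on 10.0.0.2:4420, and a 32 MiB malloc bdev attached as namespace 1. rpc_cmd is the harness's wrapper around scripts/rpc.py, so the equivalent direct invocations (against the default /var/tmp/spdk.sock socket) would look like this, with every value taken from the log:

./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0     # 32 MiB bdev, 4 KiB blocks
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1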
00:29:35.482 [2024-12-09 05:24:11.871217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3794669 ] 00:29:35.482 [2024-12-09 05:24:11.934779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.482 [2024-12-09 05:24:11.975733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.740 Running I/O for 10 seconds... 00:29:38.045 8209.00 IOPS, 64.13 MiB/s [2024-12-09T04:24:15.623Z] 8290.00 IOPS, 64.77 MiB/s [2024-12-09T04:24:16.555Z] 8312.33 IOPS, 64.94 MiB/s [2024-12-09T04:24:17.488Z] 8316.75 IOPS, 64.97 MiB/s [2024-12-09T04:24:18.419Z] 8328.80 IOPS, 65.07 MiB/s [2024-12-09T04:24:19.351Z] 8337.17 IOPS, 65.13 MiB/s [2024-12-09T04:24:20.723Z] 8345.14 IOPS, 65.20 MiB/s [2024-12-09T04:24:21.656Z] 8332.75 IOPS, 65.10 MiB/s [2024-12-09T04:24:22.588Z] 8337.22 IOPS, 65.13 MiB/s [2024-12-09T04:24:22.588Z] 8341.80 IOPS, 65.17 MiB/s 00:29:45.942 Latency(us) 00:29:45.942 [2024-12-09T04:24:22.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:45.942 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:29:45.942 Verification LBA range: start 0x0 length 0x1000 00:29:45.942 Nvme1n1 : 10.01 8342.27 65.17 0.00 0.00 15299.68 343.71 21997.30 00:29:45.942 [2024-12-09T04:24:22.588Z] =================================================================================================================== 00:29:45.942 [2024-12-09T04:24:22.588Z] Total : 8342.27 65.17 0.00 0.00 15299.68 343.71 21997.30 00:29:45.942 05:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3796275 00:29:45.942 05:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:29:45.942 05:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:45.942 05:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:29:45.942 05:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:29:45.942 05:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:29:45.942 05:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:29:45.942 05:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:45.942 05:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:45.942 { 00:29:45.942 "params": { 00:29:45.942 "name": "Nvme$subsystem", 00:29:45.942 "trtype": "$TEST_TRANSPORT", 00:29:45.942 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:45.942 "adrfam": "ipv4", 00:29:45.942 "trsvcid": "$NVMF_PORT", 00:29:45.942 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:45.942 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:45.942 "hdgst": ${hdgst:-false}, 00:29:45.942 "ddgst": ${ddgst:-false} 00:29:45.942 }, 00:29:45.942 "method": "bdev_nvme_attach_controller" 00:29:45.942 } 00:29:45.942 EOF 00:29:45.942 )") 00:29:45.942 05:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:29:45.942 
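The first bdevperf pass above runs a verify workload for 10 seconds at queue depth 128 with 8 KiB I/Os, taking its NVMe controller configuration as JSON on the file descriptor named by --json (/dev/fd/62, generated by gen_nvmf_target_json). The throughput column in the summary table follows directly from the IOPS column; checking with the numbers reported above:

# 8342.27 I/Os per second at 8192 bytes each, expressed in MiB/s
echo 'scale=2; 8342.27 * 8192 / (1024 * 1024)' | bc
# prints 65.17, matching the Total row of the latency table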
[2024-12-09 05:24:22.529602] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.942 [2024-12-09 05:24:22.529632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.942 05:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:29:45.942 05:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:29:45.942 05:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:45.942 "params": { 00:29:45.942 "name": "Nvme1", 00:29:45.942 "trtype": "tcp", 00:29:45.942 "traddr": "10.0.0.2", 00:29:45.942 "adrfam": "ipv4", 00:29:45.942 "trsvcid": "4420", 00:29:45.942 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:45.942 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:45.942 "hdgst": false, 00:29:45.942 "ddgst": false 00:29:45.942 }, 00:29:45.942 "method": "bdev_nvme_attach_controller" 00:29:45.942 }' 00:29:45.942 [2024-12-09 05:24:22.537563] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.942 [2024-12-09 05:24:22.537578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.942 [2024-12-09 05:24:22.545558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.942 [2024-12-09 05:24:22.545569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.942 [2024-12-09 05:24:22.553558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.942 [2024-12-09 05:24:22.553568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.942 [2024-12-09 05:24:22.561558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.942 [2024-12-09 05:24:22.561569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.942 [2024-12-09 05:24:22.566702] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
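The resolved parameters printed above are only the bdev_nvme_attach_controller entry; bdevperf expects a complete SPDK JSON configuration document on the descriptor passed to --json. Assuming the usual "subsystems"/"config" envelope (the wrapper emitted by gen_nvmf_target_json is not itself shown in the log), writing the same configuration to a file and handing it to bdevperf would look roughly like this; /tmp/bdevperf_nvme.json is an illustrative path:

cat <<'JSON' > /tmp/bdevperf_nvme.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 5 -q 128 -w randrw -M 50 -o 8192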
00:29:45.942 [2024-12-09 05:24:22.566744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3796275 ] 00:29:45.942 [2024-12-09 05:24:22.573559] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.942 [2024-12-09 05:24:22.573572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.942 [2024-12-09 05:24:22.581560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.942 [2024-12-09 05:24:22.581570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.200 [2024-12-09 05:24:22.589557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.200 [2024-12-09 05:24:22.589567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.200 [2024-12-09 05:24:22.597558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.200 [2024-12-09 05:24:22.597568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.200 [2024-12-09 05:24:22.605558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.200 [2024-12-09 05:24:22.605568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.200 [2024-12-09 05:24:22.613557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.200 [2024-12-09 05:24:22.613567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.200 [2024-12-09 05:24:22.621557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.200 [2024-12-09 05:24:22.621567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.200 [2024-12-09 05:24:22.629557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.200 [2024-12-09 05:24:22.629567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.200 [2024-12-09 05:24:22.630570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.200 [2024-12-09 05:24:22.637560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.200 [2024-12-09 05:24:22.637571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.200 [2024-12-09 05:24:22.645576] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.200 [2024-12-09 05:24:22.645591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.200 [2024-12-09 05:24:22.653557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.200 [2024-12-09 05:24:22.653567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.200 [2024-12-09 05:24:22.661563] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.200 [2024-12-09 05:24:22.661579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.200 [2024-12-09 05:24:22.669557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.200 [2024-12-09 05:24:22.669567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:29:46.200 [2024-12-09 05:24:22.672413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.200 [2024-12-09 05:24:22.677560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.200 [2024-12-09 05:24:22.677572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.200 [2024-12-09 05:24:22.685566] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.200 [2024-12-09 05:24:22.685582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.200 [2024-12-09 05:24:22.693566] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.200 [2024-12-09 05:24:22.693582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.200 [2024-12-09 05:24:22.701562] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.200 [2024-12-09 05:24:22.701575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.200 [2024-12-09 05:24:22.709561] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.200 [2024-12-09 05:24:22.709574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.200 [2024-12-09 05:24:22.717558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.200 [2024-12-09 05:24:22.717570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.200 [2024-12-09 05:24:22.725559] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.200 [2024-12-09 05:24:22.725570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.200 [2024-12-09 05:24:22.733561] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.200 [2024-12-09 05:24:22.733573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.200 [2024-12-09 05:24:22.741580] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.200 [2024-12-09 05:24:22.741594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.200 [2024-12-09 05:24:22.749557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.200 [2024-12-09 05:24:22.749567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.200 [2024-12-09 05:24:22.757572] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.200 [2024-12-09 05:24:22.757590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.200 [2024-12-09 05:24:22.765565] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.200 [2024-12-09 05:24:22.765583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.200 [2024-12-09 05:24:22.773563] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.200 [2024-12-09 05:24:22.773577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.201 [2024-12-09 05:24:22.781562] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.201 [2024-12-09 05:24:22.781577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.201 [2024-12-09 
05:24:22.789563] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.201 [2024-12-09 05:24:22.789578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.201 [2024-12-09 05:24:22.797561] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.201 [2024-12-09 05:24:22.797575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.201 [2024-12-09 05:24:22.805560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.201 [2024-12-09 05:24:22.805573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.201 [2024-12-09 05:24:22.813565] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.201 [2024-12-09 05:24:22.813584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.201 [2024-12-09 05:24:22.821562] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.201 [2024-12-09 05:24:22.821577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.201 Running I/O for 5 seconds... 00:29:46.201 [2024-12-09 05:24:22.835610] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.201 [2024-12-09 05:24:22.835631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.459 [2024-12-09 05:24:22.851229] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.459 [2024-12-09 05:24:22.851251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.459 [2024-12-09 05:24:22.860426] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.459 [2024-12-09 05:24:22.860446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.459 [2024-12-09 05:24:22.875684] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.459 [2024-12-09 05:24:22.875704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.459 [2024-12-09 05:24:22.890617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.459 [2024-12-09 05:24:22.890641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.459 [2024-12-09 05:24:22.902122] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.459 [2024-12-09 05:24:22.902142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.459 [2024-12-09 05:24:22.915545] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.459 [2024-12-09 05:24:22.915564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.459 [2024-12-09 05:24:22.922453] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.459 [2024-12-09 05:24:22.922472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.459 [2024-12-09 05:24:22.933651] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.459 [2024-12-09 05:24:22.933670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.459 [2024-12-09 05:24:22.940593] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
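From here to the end of the run the log is dominated by alternating "Requested NSID 1 already in use" and "Unable to add namespace" messages, logged every few milliseconds while the 5-second randrw job is in flight. That pattern is consistent with the zcopy test repeatedly re-issuing nvmf_subsystem_add_ns against the live subsystem, so that zero-copy I/O keeps running across the subsystem pause/resume cycle the RPC triggers (the error is reported from nvmf_rpc_ns_paused, the post-pause callback); the failures themselves are expected, since NSID 1 is still attached. A rough illustration of that kind of stress loop (an assumed shape, not the test script's actual code; perfpid is the backgrounded bdevperf shown above):

# keep issuing add_ns while the backgrounded bdevperf is still alive
while kill -0 "$perfpid" 2>/dev/null; do
    # expected to fail with "Requested NSID 1 already in use" while the namespace is attached
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done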
00:29:46.459 [2024-12-09 05:24:22.940612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.459 [2024-12-09 05:24:22.952973] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.459 [2024-12-09 05:24:22.952993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.459 [2024-12-09 05:24:22.967312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.459 [2024-12-09 05:24:22.967332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.459 [2024-12-09 05:24:22.974562] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.459 [2024-12-09 05:24:22.974581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.459 [2024-12-09 05:24:22.988848] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.459 [2024-12-09 05:24:22.988867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.459 [2024-12-09 05:24:23.003655] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.459 [2024-12-09 05:24:23.003675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.459 [2024-12-09 05:24:23.018468] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.459 [2024-12-09 05:24:23.018488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.459 [2024-12-09 05:24:23.028946] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.459 [2024-12-09 05:24:23.028969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.459 [2024-12-09 05:24:23.043173] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.459 [2024-12-09 05:24:23.043193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.459 [2024-12-09 05:24:23.050599] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.459 [2024-12-09 05:24:23.050619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.459 [2024-12-09 05:24:23.060214] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.459 [2024-12-09 05:24:23.060233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.459 [2024-12-09 05:24:23.075348] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.459 [2024-12-09 05:24:23.075368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.459 [2024-12-09 05:24:23.084119] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.459 [2024-12-09 05:24:23.084138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.459 [2024-12-09 05:24:23.099433] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.459 [2024-12-09 05:24:23.099452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.717 [2024-12-09 05:24:23.114866] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.717 [2024-12-09 05:24:23.114886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.717 [2024-12-09 05:24:23.125670] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.717 [2024-12-09 05:24:23.125689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.717 [2024-12-09 05:24:23.132529] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.717 [2024-12-09 05:24:23.132548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.717 [2024-12-09 05:24:23.144865] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.717 [2024-12-09 05:24:23.144893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.717 [2024-12-09 05:24:23.159629] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.717 [2024-12-09 05:24:23.159648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.717 [2024-12-09 05:24:23.174298] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.717 [2024-12-09 05:24:23.174318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.717 [2024-12-09 05:24:23.186186] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.717 [2024-12-09 05:24:23.186205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.717 [2024-12-09 05:24:23.199282] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.717 [2024-12-09 05:24:23.199301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.717 [2024-12-09 05:24:23.208141] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.717 [2024-12-09 05:24:23.208161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.717 [2024-12-09 05:24:23.223432] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.717 [2024-12-09 05:24:23.223451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.717 [2024-12-09 05:24:23.238344] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.717 [2024-12-09 05:24:23.238362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.717 [2024-12-09 05:24:23.249242] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.717 [2024-12-09 05:24:23.249260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.717 [2024-12-09 05:24:23.263553] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.717 [2024-12-09 05:24:23.263571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.717 [2024-12-09 05:24:23.278480] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.717 [2024-12-09 05:24:23.278499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.717 [2024-12-09 05:24:23.289799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.717 [2024-12-09 05:24:23.289817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.717 [2024-12-09 05:24:23.303442] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.717 [2024-12-09 05:24:23.303462] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.717 [2024-12-09 05:24:23.310838] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.717 [2024-12-09 05:24:23.310857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.717 [2024-12-09 05:24:23.321565] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.717 [2024-12-09 05:24:23.321584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.717 [2024-12-09 05:24:23.328460] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.717 [2024-12-09 05:24:23.328478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.717 [2024-12-09 05:24:23.340546] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.717 [2024-12-09 05:24:23.340565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.717 [2024-12-09 05:24:23.355605] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.717 [2024-12-09 05:24:23.355624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.975 [2024-12-09 05:24:23.370715] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.975 [2024-12-09 05:24:23.370735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.975 [2024-12-09 05:24:23.380415] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.975 [2024-12-09 05:24:23.380438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.975 [2024-12-09 05:24:23.395454] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.975 [2024-12-09 05:24:23.395478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.975 [2024-12-09 05:24:23.403183] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.975 [2024-12-09 05:24:23.403201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.975 [2024-12-09 05:24:23.412800] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.975 [2024-12-09 05:24:23.412820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.975 [2024-12-09 05:24:23.427696] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.975 [2024-12-09 05:24:23.427715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.975 [2024-12-09 05:24:23.442820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.975 [2024-12-09 05:24:23.442839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.975 [2024-12-09 05:24:23.453516] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.975 [2024-12-09 05:24:23.453535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.975 [2024-12-09 05:24:23.460350] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.975 [2024-12-09 05:24:23.460368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.975 [2024-12-09 05:24:23.472428] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.975 [2024-12-09 05:24:23.472446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.975 [2024-12-09 05:24:23.487546] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.975 [2024-12-09 05:24:23.487566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.975 [2024-12-09 05:24:23.502606] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.975 [2024-12-09 05:24:23.502625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.975 [2024-12-09 05:24:23.514070] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.975 [2024-12-09 05:24:23.514089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.975 [2024-12-09 05:24:23.527862] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.975 [2024-12-09 05:24:23.527881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.975 [2024-12-09 05:24:23.543206] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.975 [2024-12-09 05:24:23.543226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.975 [2024-12-09 05:24:23.552420] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.975 [2024-12-09 05:24:23.552439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.975 [2024-12-09 05:24:23.567473] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.975 [2024-12-09 05:24:23.567492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.975 [2024-12-09 05:24:23.575480] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.975 [2024-12-09 05:24:23.575499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.976 [2024-12-09 05:24:23.585291] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.976 [2024-12-09 05:24:23.585310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.976 [2024-12-09 05:24:23.599875] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.976 [2024-12-09 05:24:23.599898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.976 [2024-12-09 05:24:23.614716] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.976 [2024-12-09 05:24:23.614739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.233 [2024-12-09 05:24:23.623897] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.233 [2024-12-09 05:24:23.623916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.233 [2024-12-09 05:24:23.630642] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.233 [2024-12-09 05:24:23.630660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.233 [2024-12-09 05:24:23.641935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.233 [2024-12-09 05:24:23.641953] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.233 [2024-12-09 05:24:23.654900] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.233 [2024-12-09 05:24:23.654919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.233 [2024-12-09 05:24:23.665463] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.233 [2024-12-09 05:24:23.665482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.233 [2024-12-09 05:24:23.672300] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.233 [2024-12-09 05:24:23.672319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.233 [2024-12-09 05:24:23.684581] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.233 [2024-12-09 05:24:23.684600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.233 [2024-12-09 05:24:23.699894] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.233 [2024-12-09 05:24:23.699913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.233 [2024-12-09 05:24:23.714931] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.233 [2024-12-09 05:24:23.714950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.233 [2024-12-09 05:24:23.724124] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.233 [2024-12-09 05:24:23.724142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.233 [2024-12-09 05:24:23.739214] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.233 [2024-12-09 05:24:23.739233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.233 [2024-12-09 05:24:23.748465] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.233 [2024-12-09 05:24:23.748484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.233 [2024-12-09 05:24:23.763752] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.233 [2024-12-09 05:24:23.763770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.233 [2024-12-09 05:24:23.778877] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.233 [2024-12-09 05:24:23.778895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.233 [2024-12-09 05:24:23.788342] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.233 [2024-12-09 05:24:23.788361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.233 [2024-12-09 05:24:23.803656] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.233 [2024-12-09 05:24:23.803674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.233 [2024-12-09 05:24:23.818958] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.233 [2024-12-09 05:24:23.818978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.233 [2024-12-09 05:24:23.828683] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.233 [2024-12-09 05:24:23.828702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.233 16316.00 IOPS, 127.47 MiB/s [2024-12-09T04:24:23.879Z] [2024-12-09 05:24:23.843272] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.233 [2024-12-09 05:24:23.843290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.233 [2024-12-09 05:24:23.850570] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.233 [2024-12-09 05:24:23.850588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.233 [2024-12-09 05:24:23.860454] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.233 [2024-12-09 05:24:23.860473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.233 [2024-12-09 05:24:23.875441] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.233 [2024-12-09 05:24:23.875476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.490 [2024-12-09 05:24:23.890933] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.490 [2024-12-09 05:24:23.890952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.490 [2024-12-09 05:24:23.900254] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.490 [2024-12-09 05:24:23.900273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.490 [2024-12-09 05:24:23.915692] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.490 [2024-12-09 05:24:23.915711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.490 [2024-12-09 05:24:23.930938] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.490 [2024-12-09 05:24:23.930956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.490 [2024-12-09 05:24:23.941371] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.490 [2024-12-09 05:24:23.941389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.490 [2024-12-09 05:24:23.955576] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.490 [2024-12-09 05:24:23.955594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.490 [2024-12-09 05:24:23.970677] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.490 [2024-12-09 05:24:23.970696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.490 [2024-12-09 05:24:23.981098] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.490 [2024-12-09 05:24:23.981117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.490 [2024-12-09 05:24:23.995423] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.490 [2024-12-09 05:24:23.995443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.490 [2024-12-09 05:24:24.010610] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:29:47.490 [2024-12-09 05:24:24.010630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.490 [2024-12-09 05:24:24.021478] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.490 [2024-12-09 05:24:24.021498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.490 [2024-12-09 05:24:24.035807] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.490 [2024-12-09 05:24:24.035826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.490 [2024-12-09 05:24:24.050807] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.490 [2024-12-09 05:24:24.050826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.491 [2024-12-09 05:24:24.059857] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.491 [2024-12-09 05:24:24.059877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.491 [2024-12-09 05:24:24.074789] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.491 [2024-12-09 05:24:24.074808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.491 [2024-12-09 05:24:24.084474] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.491 [2024-12-09 05:24:24.084493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.491 [2024-12-09 05:24:24.099551] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.491 [2024-12-09 05:24:24.099569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.491 [2024-12-09 05:24:24.114726] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.491 [2024-12-09 05:24:24.114745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.491 [2024-12-09 05:24:24.125973] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.491 [2024-12-09 05:24:24.125991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.748 [2024-12-09 05:24:24.138885] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.748 [2024-12-09 05:24:24.138903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.748 [2024-12-09 05:24:24.150026] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.748 [2024-12-09 05:24:24.150045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.748 [2024-12-09 05:24:24.163640] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.748 [2024-12-09 05:24:24.163660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.748 [2024-12-09 05:24:24.178647] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.748 [2024-12-09 05:24:24.178667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.748 [2024-12-09 05:24:24.187898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.748 [2024-12-09 05:24:24.187917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.748 [2024-12-09 05:24:24.202866] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.748 [2024-12-09 05:24:24.202885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.748 [2024-12-09 05:24:24.212238] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.748 [2024-12-09 05:24:24.212257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.748 [2024-12-09 05:24:24.226987] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.748 [2024-12-09 05:24:24.227014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.748 [2024-12-09 05:24:24.235954] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.748 [2024-12-09 05:24:24.235973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.748 [2024-12-09 05:24:24.242824] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.748 [2024-12-09 05:24:24.242843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.748 [2024-12-09 05:24:24.253198] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.748 [2024-12-09 05:24:24.253218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.748 [2024-12-09 05:24:24.267789] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.748 [2024-12-09 05:24:24.267810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.748 [2024-12-09 05:24:24.282935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.748 [2024-12-09 05:24:24.282956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.748 [2024-12-09 05:24:24.292486] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.748 [2024-12-09 05:24:24.292506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.749 [2024-12-09 05:24:24.307958] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.749 [2024-12-09 05:24:24.307979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.749 [2024-12-09 05:24:24.323429] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.749 [2024-12-09 05:24:24.323450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.749 [2024-12-09 05:24:24.332664] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.749 [2024-12-09 05:24:24.332683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.749 [2024-12-09 05:24:24.347101] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.749 [2024-12-09 05:24:24.347120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.749 [2024-12-09 05:24:24.354198] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.749 [2024-12-09 05:24:24.354216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.749 [2024-12-09 05:24:24.365407] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.749 [2024-12-09 05:24:24.365427] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.749 [2024-12-09 05:24:24.379308] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.749 [2024-12-09 05:24:24.379328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.749 [2024-12-09 05:24:24.386312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.749 [2024-12-09 05:24:24.386330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.006 [2024-12-09 05:24:24.397650] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.006 [2024-12-09 05:24:24.397670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.006 [2024-12-09 05:24:24.404766] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.006 [2024-12-09 05:24:24.404785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.006 [2024-12-09 05:24:24.417039] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.006 [2024-12-09 05:24:24.417058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.006 [2024-12-09 05:24:24.431358] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.006 [2024-12-09 05:24:24.431377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.006 [2024-12-09 05:24:24.440339] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.006 [2024-12-09 05:24:24.440358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.006 [2024-12-09 05:24:24.455996] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.006 [2024-12-09 05:24:24.456023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.006 [2024-12-09 05:24:24.471065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.006 [2024-12-09 05:24:24.471084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.006 [2024-12-09 05:24:24.479031] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.006 [2024-12-09 05:24:24.479052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.006 [2024-12-09 05:24:24.488406] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.006 [2024-12-09 05:24:24.488427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.006 [2024-12-09 05:24:24.503311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.006 [2024-12-09 05:24:24.503331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.006 [2024-12-09 05:24:24.518521] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.006 [2024-12-09 05:24:24.518540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.006 [2024-12-09 05:24:24.527571] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.006 [2024-12-09 05:24:24.527590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.006 [2024-12-09 05:24:24.534401] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.006 [2024-12-09 05:24:24.534419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.006 [2024-12-09 05:24:24.544696] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.006 [2024-12-09 05:24:24.544715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.006 [2024-12-09 05:24:24.558964] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.006 [2024-12-09 05:24:24.558984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.006 [2024-12-09 05:24:24.569222] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.006 [2024-12-09 05:24:24.569242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.006 [2024-12-09 05:24:24.583736] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.006 [2024-12-09 05:24:24.583755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.006 [2024-12-09 05:24:24.598895] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.006 [2024-12-09 05:24:24.598915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.006 [2024-12-09 05:24:24.608267] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.006 [2024-12-09 05:24:24.608286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.006 [2024-12-09 05:24:24.623722] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.006 [2024-12-09 05:24:24.623743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.006 [2024-12-09 05:24:24.638266] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.006 [2024-12-09 05:24:24.638285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.006 [2024-12-09 05:24:24.649963] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.006 [2024-12-09 05:24:24.649982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.262 [2024-12-09 05:24:24.663688] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.262 [2024-12-09 05:24:24.663707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.262 [2024-12-09 05:24:24.678648] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.262 [2024-12-09 05:24:24.678668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.262 [2024-12-09 05:24:24.690296] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.262 [2024-12-09 05:24:24.690315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.262 [2024-12-09 05:24:24.703204] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.262 [2024-12-09 05:24:24.703224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.262 [2024-12-09 05:24:24.710599] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.262 [2024-12-09 05:24:24.710617] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.262 [2024-12-09 05:24:24.721254] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.262 [2024-12-09 05:24:24.721273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.262 [2024-12-09 05:24:24.735543] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.262 [2024-12-09 05:24:24.735562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.262 [2024-12-09 05:24:24.742780] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.262 [2024-12-09 05:24:24.742798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.262 [2024-12-09 05:24:24.753452] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.262 [2024-12-09 05:24:24.753475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.262 [2024-12-09 05:24:24.760536] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.262 [2024-12-09 05:24:24.760556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.262 [2024-12-09 05:24:24.772506] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.262 [2024-12-09 05:24:24.772524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.262 [2024-12-09 05:24:24.787734] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.262 [2024-12-09 05:24:24.787753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.262 [2024-12-09 05:24:24.802836] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.263 [2024-12-09 05:24:24.802855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.263 [2024-12-09 05:24:24.812216] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.263 [2024-12-09 05:24:24.812234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.263 [2024-12-09 05:24:24.827304] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.263 [2024-12-09 05:24:24.827323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.263 16303.00 IOPS, 127.37 MiB/s [2024-12-09T04:24:24.909Z] [2024-12-09 05:24:24.836408] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.263 [2024-12-09 05:24:24.836426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.263 [2024-12-09 05:24:24.851622] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.263 [2024-12-09 05:24:24.851640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.263 [2024-12-09 05:24:24.866384] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.263 [2024-12-09 05:24:24.866403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.263 [2024-12-09 05:24:24.876844] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.263 [2024-12-09 05:24:24.876864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.263 [2024-12-09 
05:24:24.891204] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.263 [2024-12-09 05:24:24.891223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.263 [2024-12-09 05:24:24.900164] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.263 [2024-12-09 05:24:24.900183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.520 [2024-12-09 05:24:24.915177] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.520 [2024-12-09 05:24:24.915196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.520 [2024-12-09 05:24:24.924181] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.520 [2024-12-09 05:24:24.924199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.520 [2024-12-09 05:24:24.938995] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.520 [2024-12-09 05:24:24.939019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.520 [2024-12-09 05:24:24.948267] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.520 [2024-12-09 05:24:24.948285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.520 [2024-12-09 05:24:24.963468] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.520 [2024-12-09 05:24:24.963487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.520 [2024-12-09 05:24:24.972470] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.520 [2024-12-09 05:24:24.972488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.520 [2024-12-09 05:24:24.987168] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.520 [2024-12-09 05:24:24.987191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.520 [2024-12-09 05:24:24.996123] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.520 [2024-12-09 05:24:24.996142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.520 [2024-12-09 05:24:25.011140] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.520 [2024-12-09 05:24:25.011159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.520 [2024-12-09 05:24:25.020596] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.521 [2024-12-09 05:24:25.020615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.521 [2024-12-09 05:24:25.035698] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.521 [2024-12-09 05:24:25.035716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.521 [2024-12-09 05:24:25.050812] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.521 [2024-12-09 05:24:25.050831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.521 [2024-12-09 05:24:25.060079] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.521 [2024-12-09 05:24:25.060098] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.521 [2024-12-09 05:24:25.067150] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.521 [2024-12-09 05:24:25.067169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.521 [2024-12-09 05:24:25.076856] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.521 [2024-12-09 05:24:25.076876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.521 [2024-12-09 05:24:25.091747] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.521 [2024-12-09 05:24:25.091766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.521 [2024-12-09 05:24:25.106633] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.521 [2024-12-09 05:24:25.106651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.521 [2024-12-09 05:24:25.117355] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.521 [2024-12-09 05:24:25.117374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.521 [2024-12-09 05:24:25.131799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.521 [2024-12-09 05:24:25.131818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.521 [2024-12-09 05:24:25.146806] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.521 [2024-12-09 05:24:25.146826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.521 [2024-12-09 05:24:25.155940] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.521 [2024-12-09 05:24:25.155958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.521 [2024-12-09 05:24:25.163098] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.521 [2024-12-09 05:24:25.163118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.778 [2024-12-09 05:24:25.171821] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.778 [2024-12-09 05:24:25.171839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.778 [2024-12-09 05:24:25.186801] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.778 [2024-12-09 05:24:25.186820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.778 [2024-12-09 05:24:25.195850] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.778 [2024-12-09 05:24:25.195869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.778 [2024-12-09 05:24:25.202759] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.778 [2024-12-09 05:24:25.202782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.778 [2024-12-09 05:24:25.213499] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.778 [2024-12-09 05:24:25.213518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.778 [2024-12-09 05:24:25.220206] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.778 [2024-12-09 05:24:25.220225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.778 [2024-12-09 05:24:25.234711] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.778 [2024-12-09 05:24:25.234730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.778 [2024-12-09 05:24:25.245667] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.778 [2024-12-09 05:24:25.245685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.778 [2024-12-09 05:24:25.252659] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.778 [2024-12-09 05:24:25.252677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.778 [2024-12-09 05:24:25.264745] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.778 [2024-12-09 05:24:25.264764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.778 [2024-12-09 05:24:25.279425] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.778 [2024-12-09 05:24:25.279443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.778 [2024-12-09 05:24:25.294309] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.778 [2024-12-09 05:24:25.294328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.778 [2024-12-09 05:24:25.303809] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.778 [2024-12-09 05:24:25.303828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.778 [2024-12-09 05:24:25.310606] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.778 [2024-12-09 05:24:25.310624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.778 [2024-12-09 05:24:25.325168] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.778 [2024-12-09 05:24:25.325186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.778 [2024-12-09 05:24:25.339430] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.778 [2024-12-09 05:24:25.339450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.778 [2024-12-09 05:24:25.347280] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.778 [2024-12-09 05:24:25.347299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.778 [2024-12-09 05:24:25.354876] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.778 [2024-12-09 05:24:25.354894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.778 [2024-12-09 05:24:25.366109] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.778 [2024-12-09 05:24:25.366126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.778 [2024-12-09 05:24:25.380105] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.778 [2024-12-09 05:24:25.380123] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.778 [2024-12-09 05:24:25.395101] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.778 [2024-12-09 05:24:25.395119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.778 [2024-12-09 05:24:25.402931] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.778 [2024-12-09 05:24:25.402949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.778 [2024-12-09 05:24:25.412441] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.778 [2024-12-09 05:24:25.412459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.036 [2024-12-09 05:24:25.427757] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.036 [2024-12-09 05:24:25.427776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.036 [2024-12-09 05:24:25.442727] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.036 [2024-12-09 05:24:25.442745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.036 [2024-12-09 05:24:25.452419] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.036 [2024-12-09 05:24:25.452438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.036 [2024-12-09 05:24:25.467679] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.036 [2024-12-09 05:24:25.467698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.036 [2024-12-09 05:24:25.482438] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.036 [2024-12-09 05:24:25.482457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.036 [2024-12-09 05:24:25.492351] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.036 [2024-12-09 05:24:25.492369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.036 [2024-12-09 05:24:25.507334] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.036 [2024-12-09 05:24:25.507353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.036 [2024-12-09 05:24:25.522610] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.036 [2024-12-09 05:24:25.522630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.036 [2024-12-09 05:24:25.531774] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.036 [2024-12-09 05:24:25.531793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.036 [2024-12-09 05:24:25.538506] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.036 [2024-12-09 05:24:25.538524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.036 [2024-12-09 05:24:25.549924] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.036 [2024-12-09 05:24:25.549943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.036 [2024-12-09 05:24:25.563364] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.036 [2024-12-09 05:24:25.563384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.036 [2024-12-09 05:24:25.578932] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.036 [2024-12-09 05:24:25.578952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.036 [2024-12-09 05:24:25.588239] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.036 [2024-12-09 05:24:25.588258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.036 [2024-12-09 05:24:25.602939] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.036 [2024-12-09 05:24:25.602958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.036 [2024-12-09 05:24:25.612705] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.036 [2024-12-09 05:24:25.612724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.036 [2024-12-09 05:24:25.627259] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.036 [2024-12-09 05:24:25.627278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.036 [2024-12-09 05:24:25.634596] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.036 [2024-12-09 05:24:25.634615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.036 [2024-12-09 05:24:25.644848] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.036 [2024-12-09 05:24:25.644868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.036 [2024-12-09 05:24:25.659347] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.036 [2024-12-09 05:24:25.659366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.036 [2024-12-09 05:24:25.666312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.036 [2024-12-09 05:24:25.666331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.036 [2024-12-09 05:24:25.677939] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.036 [2024-12-09 05:24:25.677959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.294 [2024-12-09 05:24:25.691583] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.294 [2024-12-09 05:24:25.691602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.294 [2024-12-09 05:24:25.706826] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.294 [2024-12-09 05:24:25.706847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.294 [2024-12-09 05:24:25.715864] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.294 [2024-12-09 05:24:25.715884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.294 [2024-12-09 05:24:25.722935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.294 [2024-12-09 05:24:25.722955] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.294 [2024-12-09 05:24:25.733258] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.294 [2024-12-09 05:24:25.733278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.294 [2024-12-09 05:24:25.747900] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.294 [2024-12-09 05:24:25.747921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.294 [2024-12-09 05:24:25.763409] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.294 [2024-12-09 05:24:25.763428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.294 [2024-12-09 05:24:25.778458] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.294 [2024-12-09 05:24:25.778478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.294 [2024-12-09 05:24:25.790142] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.294 [2024-12-09 05:24:25.790161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.294 [2024-12-09 05:24:25.802978] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.294 [2024-12-09 05:24:25.803004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.294 [2024-12-09 05:24:25.813617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.294 [2024-12-09 05:24:25.813636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.294 [2024-12-09 05:24:25.820574] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.294 [2024-12-09 05:24:25.820593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.294 [2024-12-09 05:24:25.832547] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.294 [2024-12-09 05:24:25.832566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.294 16326.67 IOPS, 127.55 MiB/s [2024-12-09T04:24:25.940Z] [2024-12-09 05:24:25.847925] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.294 [2024-12-09 05:24:25.847945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.294 [2024-12-09 05:24:25.862898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.294 [2024-12-09 05:24:25.862929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.294 [2024-12-09 05:24:25.872335] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.294 [2024-12-09 05:24:25.872354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.294 [2024-12-09 05:24:25.887666] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.294 [2024-12-09 05:24:25.887685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.294 [2024-12-09 05:24:25.902460] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.294 [2024-12-09 05:24:25.902479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.294 [2024-12-09 
05:24:25.912531] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.294 [2024-12-09 05:24:25.912550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.294 [2024-12-09 05:24:25.927425] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.294 [2024-12-09 05:24:25.927445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.552 [2024-12-09 05:24:25.942753] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.552 [2024-12-09 05:24:25.942773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.552 [2024-12-09 05:24:25.952370] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.552 [2024-12-09 05:24:25.952390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.552 [2024-12-09 05:24:25.967580] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.552 [2024-12-09 05:24:25.967599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.552 [2024-12-09 05:24:25.975480] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.552 [2024-12-09 05:24:25.975499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.552 [2024-12-09 05:24:25.990369] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.552 [2024-12-09 05:24:25.990388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.552 [2024-12-09 05:24:25.999737] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.552 [2024-12-09 05:24:25.999757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.552 [2024-12-09 05:24:26.006389] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.552 [2024-12-09 05:24:26.006407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.552 [2024-12-09 05:24:26.017681] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.552 [2024-12-09 05:24:26.017700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.552 [2024-12-09 05:24:26.024466] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.552 [2024-12-09 05:24:26.024484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.552 [2024-12-09 05:24:26.035982] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.552 [2024-12-09 05:24:26.036008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.552 [2024-12-09 05:24:26.050962] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.552 [2024-12-09 05:24:26.050982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.552 [2024-12-09 05:24:26.060229] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.552 [2024-12-09 05:24:26.060248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.552 [2024-12-09 05:24:26.075148] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.552 [2024-12-09 05:24:26.075168] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.552 [2024-12-09 05:24:26.084529] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.552 [2024-12-09 05:24:26.084554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.552 [2024-12-09 05:24:26.099626] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.552 [2024-12-09 05:24:26.099647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.552 [2024-12-09 05:24:26.114318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.552 [2024-12-09 05:24:26.114338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.552 [2024-12-09 05:24:26.123890] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.552 [2024-12-09 05:24:26.123910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.553 [2024-12-09 05:24:26.130625] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.553 [2024-12-09 05:24:26.130643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.553 [2024-12-09 05:24:26.141875] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.553 [2024-12-09 05:24:26.141893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.553 [2024-12-09 05:24:26.155527] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.553 [2024-12-09 05:24:26.155546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.553 [2024-12-09 05:24:26.170768] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.553 [2024-12-09 05:24:26.170787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.553 [2024-12-09 05:24:26.180057] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.553 [2024-12-09 05:24:26.180076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.553 [2024-12-09 05:24:26.187137] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.553 [2024-12-09 05:24:26.187157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.810 [2024-12-09 05:24:26.197244] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.810 [2024-12-09 05:24:26.197263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.810 [2024-12-09 05:24:26.211907] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.810 [2024-12-09 05:24:26.211926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.810 [2024-12-09 05:24:26.227314] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.810 [2024-12-09 05:24:26.227333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.810 [2024-12-09 05:24:26.234996] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.810 [2024-12-09 05:24:26.235021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.810 [2024-12-09 05:24:26.249283] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.810 [2024-12-09 05:24:26.249301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.810 [2024-12-09 05:24:26.263952] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.810 [2024-12-09 05:24:26.263971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.810 [2024-12-09 05:24:26.278824] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.810 [2024-12-09 05:24:26.278843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.810 [2024-12-09 05:24:26.288534] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.810 [2024-12-09 05:24:26.288554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.810 [2024-12-09 05:24:26.303777] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.810 [2024-12-09 05:24:26.303796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.810 [2024-12-09 05:24:26.318594] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.810 [2024-12-09 05:24:26.318618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.810 [2024-12-09 05:24:26.329482] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.810 [2024-12-09 05:24:26.329503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.810 [2024-12-09 05:24:26.336551] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.810 [2024-12-09 05:24:26.336570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.810 [2024-12-09 05:24:26.348826] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.810 [2024-12-09 05:24:26.348844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.810 [2024-12-09 05:24:26.363439] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.811 [2024-12-09 05:24:26.363457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.811 [2024-12-09 05:24:26.378415] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.811 [2024-12-09 05:24:26.378435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.811 [2024-12-09 05:24:26.389309] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.811 [2024-12-09 05:24:26.389329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.811 [2024-12-09 05:24:26.403703] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.811 [2024-12-09 05:24:26.403722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.811 [2024-12-09 05:24:26.418633] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.811 [2024-12-09 05:24:26.418652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.811 [2024-12-09 05:24:26.427895] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.811 [2024-12-09 05:24:26.427914] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.811 [2024-12-09 05:24:26.434423] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.811 [2024-12-09 05:24:26.434441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.811 [2024-12-09 05:24:26.444920] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.811 [2024-12-09 05:24:26.444938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.069 [2024-12-09 05:24:26.459452] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.069 [2024-12-09 05:24:26.459471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.069 [2024-12-09 05:24:26.474499] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.069 [2024-12-09 05:24:26.474518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.069 [2024-12-09 05:24:26.485516] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.069 [2024-12-09 05:24:26.485535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.069 [2024-12-09 05:24:26.492301] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.069 [2024-12-09 05:24:26.492319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.069 [2024-12-09 05:24:26.505075] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.069 [2024-12-09 05:24:26.505093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.069 [2024-12-09 05:24:26.519695] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.069 [2024-12-09 05:24:26.519714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.069 [2024-12-09 05:24:26.534918] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.069 [2024-12-09 05:24:26.534936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.069 [2024-12-09 05:24:26.544064] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.069 [2024-12-09 05:24:26.544087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.069 [2024-12-09 05:24:26.559279] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.070 [2024-12-09 05:24:26.559298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.070 [2024-12-09 05:24:26.568621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.070 [2024-12-09 05:24:26.568641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.070 [2024-12-09 05:24:26.583299] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.070 [2024-12-09 05:24:26.583319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.070 [2024-12-09 05:24:26.598691] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.070 [2024-12-09 05:24:26.598711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.070 [2024-12-09 05:24:26.609426] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.070 [2024-12-09 05:24:26.609446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.070 [2024-12-09 05:24:26.623299] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.070 [2024-12-09 05:24:26.623319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.070 [2024-12-09 05:24:26.630438] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.070 [2024-12-09 05:24:26.630456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.070 [2024-12-09 05:24:26.642017] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.070 [2024-12-09 05:24:26.642036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.070 [2024-12-09 05:24:26.655820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.070 [2024-12-09 05:24:26.655839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.070 [2024-12-09 05:24:26.663065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.070 [2024-12-09 05:24:26.663088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.070 [2024-12-09 05:24:26.672762] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.070 [2024-12-09 05:24:26.672781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.070 [2024-12-09 05:24:26.688063] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.070 [2024-12-09 05:24:26.688082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.070 [2024-12-09 05:24:26.703126] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.070 [2024-12-09 05:24:26.703146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.070 [2024-12-09 05:24:26.710878] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.070 [2024-12-09 05:24:26.710896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.328 [2024-12-09 05:24:26.725195] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.328 [2024-12-09 05:24:26.725213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.328 [2024-12-09 05:24:26.739651] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.328 [2024-12-09 05:24:26.739671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.329 [2024-12-09 05:24:26.754884] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.329 [2024-12-09 05:24:26.754903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.329 [2024-12-09 05:24:26.765637] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.329 [2024-12-09 05:24:26.765656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.329 [2024-12-09 05:24:26.772376] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.329 [2024-12-09 05:24:26.772403] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.329 [2024-12-09 05:24:26.784647] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.329 [2024-12-09 05:24:26.784665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.329 [2024-12-09 05:24:26.799622] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.329 [2024-12-09 05:24:26.799641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.329 [2024-12-09 05:24:26.814640] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.329 [2024-12-09 05:24:26.814659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.329 [2024-12-09 05:24:26.824237] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.329 [2024-12-09 05:24:26.824255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.329 16323.00 IOPS, 127.52 MiB/s [2024-12-09T04:24:26.975Z] [2024-12-09 05:24:26.839082] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.329 [2024-12-09 05:24:26.839101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.329 [2024-12-09 05:24:26.848473] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.329 [2024-12-09 05:24:26.848491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.329 [2024-12-09 05:24:26.863248] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.329 [2024-12-09 05:24:26.863267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.329 [2024-12-09 05:24:26.872324] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.329 [2024-12-09 05:24:26.872342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.329 [2024-12-09 05:24:26.887339] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.329 [2024-12-09 05:24:26.887357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.329 [2024-12-09 05:24:26.902770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.329 [2024-12-09 05:24:26.902789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.329 [2024-12-09 05:24:26.911941] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.329 [2024-12-09 05:24:26.911960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.329 [2024-12-09 05:24:26.926596] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.329 [2024-12-09 05:24:26.926615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.329 [2024-12-09 05:24:26.935922] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.329 [2024-12-09 05:24:26.935940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.329 [2024-12-09 05:24:26.942790] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.329 [2024-12-09 05:24:26.942808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.329 [2024-12-09 
05:24:26.952894] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.329 [2024-12-09 05:24:26.952912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.329 [2024-12-09 05:24:26.967966] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.329 [2024-12-09 05:24:26.967985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.587 [2024-12-09 05:24:26.983050] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.587 [2024-12-09 05:24:26.983069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.587 [2024-12-09 05:24:26.993289] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.587 [2024-12-09 05:24:26.993308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.587 [2024-12-09 05:24:27.007855] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.587 [2024-12-09 05:24:27.007875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.587 [2024-12-09 05:24:27.022958] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.587 [2024-12-09 05:24:27.022977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.587 [2024-12-09 05:24:27.032097] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.587 [2024-12-09 05:24:27.032115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.587 [2024-12-09 05:24:27.047259] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.587 [2024-12-09 05:24:27.047278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.587 [2024-12-09 05:24:27.054889] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.587 [2024-12-09 05:24:27.054907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.587 [2024-12-09 05:24:27.064321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.587 [2024-12-09 05:24:27.064340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.587 [2024-12-09 05:24:27.079420] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.587 [2024-12-09 05:24:27.079439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.587 [2024-12-09 05:24:27.087243] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.587 [2024-12-09 05:24:27.087261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.587 [2024-12-09 05:24:27.101482] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.587 [2024-12-09 05:24:27.101503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.587 [2024-12-09 05:24:27.109068] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.587 [2024-12-09 05:24:27.109086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.587 [2024-12-09 05:24:27.123324] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.587 [2024-12-09 05:24:27.123344] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.587 [2024-12-09 05:24:27.138267] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.587 [2024-12-09 05:24:27.138287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.587 [2024-12-09 05:24:27.147829] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.588 [2024-12-09 05:24:27.147848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.588 [2024-12-09 05:24:27.154582] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.588 [2024-12-09 05:24:27.154603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.588 [2024-12-09 05:24:27.165931] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.588 [2024-12-09 05:24:27.165951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.588 [2024-12-09 05:24:27.179122] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.588 [2024-12-09 05:24:27.179143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.588 [2024-12-09 05:24:27.188471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.588 [2024-12-09 05:24:27.188492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.588 [2024-12-09 05:24:27.203735] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.588 [2024-12-09 05:24:27.203756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.588 [2024-12-09 05:24:27.218650] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.588 [2024-12-09 05:24:27.218671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.588 [2024-12-09 05:24:27.229343] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.588 [2024-12-09 05:24:27.229362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.846 [2024-12-09 05:24:27.243926] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.846 [2024-12-09 05:24:27.243945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.846 [2024-12-09 05:24:27.259025] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.846 [2024-12-09 05:24:27.259044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.846 [2024-12-09 05:24:27.268321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.846 [2024-12-09 05:24:27.268340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.846 [2024-12-09 05:24:27.283322] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.846 [2024-12-09 05:24:27.283342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.846 [2024-12-09 05:24:27.290945] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.846 [2024-12-09 05:24:27.290963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.846 [2024-12-09 05:24:27.300370] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.846 [2024-12-09 05:24:27.300390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.846 [2024-12-09 05:24:27.315276] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.846 [2024-12-09 05:24:27.315295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.846 [2024-12-09 05:24:27.322793] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.846 [2024-12-09 05:24:27.322812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.846 [2024-12-09 05:24:27.332449] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.846 [2024-12-09 05:24:27.332468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.846 [2024-12-09 05:24:27.347275] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.846 [2024-12-09 05:24:27.347294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.846 [2024-12-09 05:24:27.356721] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.846 [2024-12-09 05:24:27.356740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.846 [2024-12-09 05:24:27.371786] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.846 [2024-12-09 05:24:27.371805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.846 [2024-12-09 05:24:27.386392] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.846 [2024-12-09 05:24:27.386412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.846 [2024-12-09 05:24:27.395795] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.846 [2024-12-09 05:24:27.395814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.846 [2024-12-09 05:24:27.410464] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.846 [2024-12-09 05:24:27.410483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.847 [2024-12-09 05:24:27.420929] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.847 [2024-12-09 05:24:27.420948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.847 [2024-12-09 05:24:27.435713] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.847 [2024-12-09 05:24:27.435732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.847 [2024-12-09 05:24:27.450899] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.847 [2024-12-09 05:24:27.450923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.847 [2024-12-09 05:24:27.460255] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.847 [2024-12-09 05:24:27.460274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.847 [2024-12-09 05:24:27.475540] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.847 [2024-12-09 05:24:27.475559] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.847 [2024-12-09 05:24:27.490501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.847 [2024-12-09 05:24:27.490521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.106 [2024-12-09 05:24:27.500392] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.106 [2024-12-09 05:24:27.500411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.106 [2024-12-09 05:24:27.515567] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.106 [2024-12-09 05:24:27.515587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.106 [2024-12-09 05:24:27.530060] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.106 [2024-12-09 05:24:27.530078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.106 [2024-12-09 05:24:27.542355] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.106 [2024-12-09 05:24:27.542374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.106 [2024-12-09 05:24:27.555217] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.106 [2024-12-09 05:24:27.555236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.106 [2024-12-09 05:24:27.562747] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.106 [2024-12-09 05:24:27.562767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.106 [2024-12-09 05:24:27.572277] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.106 [2024-12-09 05:24:27.572296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.106 [2024-12-09 05:24:27.587519] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.106 [2024-12-09 05:24:27.587539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.106 [2024-12-09 05:24:27.602375] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.106 [2024-12-09 05:24:27.602394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.106 [2024-12-09 05:24:27.614102] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.106 [2024-12-09 05:24:27.614120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.106 [2024-12-09 05:24:27.626988] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.106 [2024-12-09 05:24:27.627013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.106 [2024-12-09 05:24:27.637520] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.106 [2024-12-09 05:24:27.637538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.106 [2024-12-09 05:24:27.644494] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.106 [2024-12-09 05:24:27.644513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.106 [2024-12-09 05:24:27.656807] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.106 [2024-12-09 05:24:27.656827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.106 [2024-12-09 05:24:27.671386] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.106 [2024-12-09 05:24:27.671404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.106 [2024-12-09 05:24:27.686380] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.106 [2024-12-09 05:24:27.686404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.106 [2024-12-09 05:24:27.695561] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.106 [2024-12-09 05:24:27.695579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.106 [2024-12-09 05:24:27.702936] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.106 [2024-12-09 05:24:27.702956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.106 [2024-12-09 05:24:27.712694] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.106 [2024-12-09 05:24:27.712712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.106 [2024-12-09 05:24:27.727892] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.106 [2024-12-09 05:24:27.727911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.106 [2024-12-09 05:24:27.743011] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.106 [2024-12-09 05:24:27.743032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.365 [2024-12-09 05:24:27.752088] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.365 [2024-12-09 05:24:27.752107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.365 [2024-12-09 05:24:27.767291] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.365 [2024-12-09 05:24:27.767311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.365 [2024-12-09 05:24:27.774568] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.365 [2024-12-09 05:24:27.774586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.365 [2024-12-09 05:24:27.784768] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.365 [2024-12-09 05:24:27.784787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.365 [2024-12-09 05:24:27.799294] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.365 [2024-12-09 05:24:27.799313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.365 [2024-12-09 05:24:27.806570] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.365 [2024-12-09 05:24:27.806588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.365 [2024-12-09 05:24:27.816402] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.365 [2024-12-09 05:24:27.816420] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.365 [2024-12-09 05:24:27.831826] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.365 [2024-12-09 05:24:27.831845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.365 16339.20 IOPS, 127.65 MiB/s [2024-12-09T04:24:28.011Z] [2024-12-09 05:24:27.845706] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.366 [2024-12-09 05:24:27.845726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.366
00:29:51.366 Latency(us)
00:29:51.366 [2024-12-09T04:24:28.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:51.366 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:29:51.366 Nvme1n1 : 5.01 16339.38 127.65 0.00 0.00 7825.89 2151.29 13506.11
00:29:51.366 [2024-12-09T04:24:28.012Z] ===================================================================================================================
00:29:51.366 [2024-12-09T04:24:28.012Z] Total : 16339.38 127.65 0.00 0.00 7825.89 2151.29 13506.11
00:29:51.366 [2024-12-09 05:24:27.853564] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.366 [2024-12-09 05:24:27.853580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.366 [2024-12-09 05:24:27.861560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.366 [2024-12-09 05:24:27.861579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.366 [2024-12-09 05:24:27.869562] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.366 [2024-12-09 05:24:27.869573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.366 [2024-12-09 05:24:27.877575] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.366 [2024-12-09 05:24:27.877594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.366 [2024-12-09 05:24:27.885565] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.366 [2024-12-09 05:24:27.885577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.366 [2024-12-09 05:24:27.893560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.366 [2024-12-09 05:24:27.893572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.366 [2024-12-09 05:24:27.901563] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.366 [2024-12-09 05:24:27.901576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.366 [2024-12-09 05:24:27.909560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.366 [2024-12-09 05:24:27.909572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.366 [2024-12-09 05:24:27.917563] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.366 [2024-12-09 05:24:27.917576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.366 [2024-12-09 05:24:27.925560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.366 [2024-12-09
05:24:27.925572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.366 [2024-12-09 05:24:27.933559] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.366 [2024-12-09 05:24:27.933569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.366 [2024-12-09 05:24:27.941559] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.366 [2024-12-09 05:24:27.941570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.366 [2024-12-09 05:24:27.949560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.366 [2024-12-09 05:24:27.949571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.366 [2024-12-09 05:24:27.957560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.366 [2024-12-09 05:24:27.957571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.366 [2024-12-09 05:24:27.965557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.366 [2024-12-09 05:24:27.965567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.366 [2024-12-09 05:24:27.973558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.366 [2024-12-09 05:24:27.973568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.366 [2024-12-09 05:24:27.981564] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.366 [2024-12-09 05:24:27.981576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.366 [2024-12-09 05:24:27.989558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.366 [2024-12-09 05:24:27.989568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.366 [2024-12-09 05:24:27.997557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.366 [2024-12-09 05:24:27.997567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.366 [2024-12-09 05:24:28.005558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.366 [2024-12-09 05:24:28.005568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.625 [2024-12-09 05:24:28.013560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.625 [2024-12-09 05:24:28.013570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.625 [2024-12-09 05:24:28.021556] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.625 [2024-12-09 05:24:28.021566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.625 [2024-12-09 05:24:28.029557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.625 [2024-12-09 05:24:28.029567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.625 [2024-12-09 05:24:28.037557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.625 [2024-12-09 05:24:28.037567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.625 
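The long run of paired errors above is the zcopy test repeatedly asking the target to attach a namespace with NSID 1 while NSID 1 is still attached and the random read/write workload is in flight; spdk_nvmf_subsystem_add_ns_ext() rejects each attempt and the RPC layer logs "Unable to add namespace", while the interleaved IOPS lines show that I/O keeps running. A minimal sketch of what such a rejected call looks like, assuming the rpc.py path of this workspace and the bdev/NQN names visible in the log (the loop is illustrative, not the zcopy.sh script itself):

# Hypothetical reproduction sketch -- not test/nvmf/target/zcopy.sh itself.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path assumed from this workspace
NQN=nqn.2016-06.io.spdk:cnode1

# NSID 1 is already attached to the subsystem, so every retry is expected to be
# rejected with "Requested NSID 1 already in use" while the workload keeps running.
for i in $(seq 1 5); do
    "$RPC" nvmf_subsystem_add_ns "$NQN" malloc0 -n 1 \
        || echo "attempt $i rejected, as expected"
done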
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3796275) - No such process 00:29:51.625 05:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3796275 00:29:51.625 05:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:51.625 05:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.625 05:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:51.625 05:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.625 05:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:51.625 05:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.625 05:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:51.625 delay0 00:29:51.625 05:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.625 05:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:29:51.625 05:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.625 05:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:51.625 05:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.625 05:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:29:51.625 [2024-12-09 05:24:28.126090] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:58.189 Initializing NVMe Controllers 00:29:58.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:58.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:58.189 Initialization complete. Launching workers. 
00:29:58.189 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 3450 00:29:58.189 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 3721, failed to submit 49 00:29:58.189 success 3585, unsuccessful 136, failed 0 00:29:58.189 05:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:29:58.189 05:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:29:58.189 05:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:58.189 05:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:29:58.189 05:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:58.189 05:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:29:58.189 05:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:58.189 05:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:58.447 rmmod nvme_tcp 00:29:58.447 rmmod nvme_fabrics 00:29:58.447 rmmod nvme_keyring 00:29:58.447 05:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:58.447 05:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:29:58.447 05:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:29:58.447 05:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3794545 ']' 00:29:58.447 05:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3794545 00:29:58.447 05:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3794545 ']' 00:29:58.447 05:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3794545 00:29:58.447 05:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:29:58.447 05:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:58.447 05:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3794545 00:29:58.447 05:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:58.447 05:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:58.447 05:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3794545' 00:29:58.447 killing process with pid 3794545 00:29:58.447 05:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3794545 00:29:58.447 05:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3794545 00:29:58.705 05:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:58.705 05:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:58.705 05:24:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:58.705 05:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:29:58.705 05:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:29:58.705 05:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:58.705 05:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:29:58.705 05:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:58.705 05:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:58.705 05:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.705 05:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:58.705 05:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.609 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:00.609 00:30:00.609 real 0m31.323s 00:30:00.609 user 0m41.047s 00:30:00.609 sys 0m11.794s 00:30:00.609 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:00.609 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:00.609 ************************************ 00:30:00.609 END TEST nvmf_zcopy 00:30:00.609 ************************************ 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:00.869 ************************************ 00:30:00.869 START TEST nvmf_nmic 00:30:00.869 ************************************ 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:00.869 * Looking for test storage... 
00:30:00.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:30:00.869 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:00.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.870 --rc genhtml_branch_coverage=1 00:30:00.870 --rc genhtml_function_coverage=1 00:30:00.870 --rc genhtml_legend=1 00:30:00.870 --rc geninfo_all_blocks=1 00:30:00.870 --rc geninfo_unexecuted_blocks=1 00:30:00.870 00:30:00.870 ' 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:00.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.870 --rc genhtml_branch_coverage=1 00:30:00.870 --rc genhtml_function_coverage=1 00:30:00.870 --rc genhtml_legend=1 00:30:00.870 --rc geninfo_all_blocks=1 00:30:00.870 --rc geninfo_unexecuted_blocks=1 00:30:00.870 00:30:00.870 ' 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:00.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.870 --rc genhtml_branch_coverage=1 00:30:00.870 --rc genhtml_function_coverage=1 00:30:00.870 --rc genhtml_legend=1 00:30:00.870 --rc geninfo_all_blocks=1 00:30:00.870 --rc geninfo_unexecuted_blocks=1 00:30:00.870 00:30:00.870 ' 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:00.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.870 --rc genhtml_branch_coverage=1 00:30:00.870 --rc genhtml_function_coverage=1 00:30:00.870 --rc genhtml_legend=1 00:30:00.870 --rc geninfo_all_blocks=1 00:30:00.870 --rc geninfo_unexecuted_blocks=1 00:30:00.870 00:30:00.870 ' 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:00.870 05:24:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:00.870 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.130 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:01.130 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:01.130 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:30:01.130 05:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:07.697 05:24:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:07.697 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:07.697 05:24:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:07.697 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:07.697 Found net devices under 0000:86:00.0: cvl_0_0 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:07.697 
05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:07.697 Found net devices under 0000:86:00.1: cvl_0_1 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:07.697 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
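For reference, the nvmf_tcp_init steps being logged here (and continued just below) build the physical E810 test topology: one port (cvl_0_0) is moved into a private network namespace and used by the target at 10.0.0.2, while the other port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1. A condensed sketch using the interface names and addresses shown in this log, not the common.sh implementation itself:

# Sketch of the target/initiator topology assembled by nvmf_tcp_init (names from this log).
TARGET_IF=cvl_0_0          # 0000:86:00.0, owned by the SPDK target
INITIATOR_IF=cvl_0_1       # 0000:86:00.1, used by the host side
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"                    # isolate the target port
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP
ping -c 1 10.0.0.2                                      # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                  # target -> initiator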
00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:07.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:07.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:30:07.698 00:30:07.698 --- 10.0.0.2 ping statistics --- 00:30:07.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.698 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:07.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:07.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:30:07.698 00:30:07.698 --- 10.0.0.1 ping statistics --- 00:30:07.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.698 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3801842 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3801842 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3801842 ']' 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:07.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:07.698 [2024-12-09 05:24:43.445277] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:07.698 [2024-12-09 05:24:43.446195] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:30:07.698 [2024-12-09 05:24:43.446228] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:07.698 [2024-12-09 05:24:43.512137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:07.698 [2024-12-09 05:24:43.557603] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:07.698 [2024-12-09 05:24:43.557641] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:07.698 [2024-12-09 05:24:43.557649] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:07.698 [2024-12-09 05:24:43.557656] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:07.698 [2024-12-09 05:24:43.557661] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:07.698 [2024-12-09 05:24:43.561018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:07.698 [2024-12-09 05:24:43.561040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:07.698 [2024-12-09 05:24:43.561062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:07.698 [2024-12-09 05:24:43.561064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.698 [2024-12-09 05:24:43.630237] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:07.698 [2024-12-09 05:24:43.630353] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:07.698 [2024-12-09 05:24:43.630617] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
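With the data path verified and nvmf_tgt started in interrupt mode inside the namespace, nmic.sh drives the target over JSON-RPC. Stripped of the xtrace noise, the sequence traced below amounts to roughly the following (NQNs, serials, and addresses as used by this test; the socket wait is a simplified stand-in for waitforlisten; a sketch, not the literal script):

# start the target inside the namespace, then wait for its RPC socket to answer
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# test case 1: the same bdev cannot back a namespace in a second subsystem,
# because the first subsystem already holds an exclusive_write claim on it
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail

# test case 2: one subsystem may listen on multiple ports, and the host connects on both
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
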
00:30:07.698 [2024-12-09 05:24:43.630902] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:07.698 [2024-12-09 05:24:43.631090] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:07.698 [2024-12-09 05:24:43.697597] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:07.698 Malloc0 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:07.698 
05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:07.698 [2024-12-09 05:24:43.769783] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:07.698 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.699 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:30:07.699 test case1: single bdev can't be used in multiple subsystems 00:30:07.699 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:30:07.699 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.699 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:07.699 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.699 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:07.699 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.699 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:07.699 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.699 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:30:07.699 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:30:07.699 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.699 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:07.699 [2024-12-09 05:24:43.801542] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:30:07.699 [2024-12-09 05:24:43.801562] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:30:07.699 [2024-12-09 05:24:43.801575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.699 request: 00:30:07.699 { 00:30:07.699 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:30:07.699 "namespace": { 00:30:07.699 "bdev_name": "Malloc0", 00:30:07.699 "no_auto_visible": false, 00:30:07.699 "hide_metadata": false 00:30:07.699 }, 00:30:07.699 "method": "nvmf_subsystem_add_ns", 00:30:07.699 "req_id": 1 00:30:07.699 } 00:30:07.699 Got JSON-RPC error response 00:30:07.699 response: 00:30:07.699 { 00:30:07.699 "code": -32602, 00:30:07.699 "message": "Invalid parameters" 00:30:07.699 } 00:30:07.699 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:07.699 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:30:07.699 05:24:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:30:07.699 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:30:07.699 Adding namespace failed - expected result. 00:30:07.699 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:30:07.699 test case2: host connect to nvmf target in multiple paths 00:30:07.699 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:07.699 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.699 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:07.699 [2024-12-09 05:24:43.813639] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:07.699 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.699 05:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:07.699 05:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:30:07.699 05:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:30:07.699 05:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:30:07.699 05:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:30:07.699 05:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:30:07.699 05:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:30:10.231 05:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:30:10.231 05:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:30:10.231 05:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:30:10.231 05:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:30:10.231 05:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:30:10.231 05:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:30:10.231 05:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:10.231 [global] 00:30:10.231 thread=1 00:30:10.231 invalidate=1 
00:30:10.231 rw=write 00:30:10.231 time_based=1 00:30:10.231 runtime=1 00:30:10.231 ioengine=libaio 00:30:10.231 direct=1 00:30:10.231 bs=4096 00:30:10.231 iodepth=1 00:30:10.231 norandommap=0 00:30:10.231 numjobs=1 00:30:10.231 00:30:10.231 verify_dump=1 00:30:10.231 verify_backlog=512 00:30:10.231 verify_state_save=0 00:30:10.231 do_verify=1 00:30:10.231 verify=crc32c-intel 00:30:10.231 [job0] 00:30:10.231 filename=/dev/nvme0n1 00:30:10.231 Could not set queue depth (nvme0n1) 00:30:10.231 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:10.231 fio-3.35 00:30:10.231 Starting 1 thread 00:30:11.166 00:30:11.166 job0: (groupid=0, jobs=1): err= 0: pid=3802459: Mon Dec 9 05:24:47 2024 00:30:11.166 read: IOPS=1493, BW=5973KiB/s (6116kB/s)(6164KiB/1032msec) 00:30:11.166 slat (nsec): min=7073, max=35192, avg=8177.20, stdev=1648.94 00:30:11.166 clat (usec): min=236, max=41330, avg=388.88, stdev=2318.59 00:30:11.166 lat (usec): min=248, max=41340, avg=397.05, stdev=2319.26 00:30:11.166 clat percentiles (usec): 00:30:11.166 | 1.00th=[ 247], 5.00th=[ 249], 10.00th=[ 251], 20.00th=[ 251], 00:30:11.166 | 30.00th=[ 253], 40.00th=[ 255], 50.00th=[ 255], 60.00th=[ 258], 00:30:11.166 | 70.00th=[ 260], 80.00th=[ 262], 90.00th=[ 265], 95.00th=[ 269], 00:30:11.166 | 99.00th=[ 293], 99.50th=[ 326], 99.90th=[41157], 99.95th=[41157], 00:30:11.166 | 99.99th=[41157] 00:30:11.166 write: IOPS=1984, BW=7938KiB/s (8128kB/s)(8192KiB/1032msec); 0 zone resets 00:30:11.167 slat (usec): min=10, max=28934, avg=25.95, stdev=639.11 00:30:11.167 clat (usec): min=139, max=3884, avg=173.83, stdev=88.17 00:30:11.167 lat (usec): min=155, max=29243, avg=199.78, stdev=648.12 00:30:11.167 clat percentiles (usec): 00:30:11.167 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 153], 00:30:11.167 | 30.00th=[ 155], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 00:30:11.167 | 70.00th=[ 165], 80.00th=[ 198], 90.00th=[ 212], 95.00th=[ 255], 00:30:11.167 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 326], 99.95th=[ 469], 00:30:11.167 | 99.99th=[ 3884] 00:30:11.167 bw ( KiB/s): min= 7624, max= 8760, per=100.00%, avg=8192.00, stdev=803.27, samples=2 00:30:11.167 iops : min= 1906, max= 2190, avg=2048.00, stdev=200.82, samples=2 00:30:11.167 lat (usec) : 250=58.01%, 500=41.82% 00:30:11.167 lat (msec) : 4=0.03%, 50=0.14% 00:30:11.167 cpu : usr=3.39%, sys=5.04%, ctx=3591, majf=0, minf=1 00:30:11.167 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:11.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.167 issued rwts: total=1541,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.167 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:11.167 00:30:11.167 Run status group 0 (all jobs): 00:30:11.167 READ: bw=5973KiB/s (6116kB/s), 5973KiB/s-5973KiB/s (6116kB/s-6116kB/s), io=6164KiB (6312kB), run=1032-1032msec 00:30:11.167 WRITE: bw=7938KiB/s (8128kB/s), 7938KiB/s-7938KiB/s (8128kB/s-8128kB/s), io=8192KiB (8389kB), run=1032-1032msec 00:30:11.167 00:30:11.167 Disk stats (read/write): 00:30:11.167 nvme0n1: ios=1563/2048, merge=0/0, ticks=1410/344, in_queue=1754, util=98.70% 00:30:11.167 05:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:11.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:30:11.426 05:24:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:11.426 05:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:30:11.426 05:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:30:11.426 05:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:11.426 05:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:30:11.426 05:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:11.426 05:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:30:11.426 05:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:11.426 05:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:30:11.426 05:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:11.426 05:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:30:11.426 05:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:11.426 05:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:30:11.426 05:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:11.426 05:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:11.426 rmmod nvme_tcp 00:30:11.426 rmmod nvme_fabrics 00:30:11.426 rmmod nvme_keyring 00:30:11.426 05:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:11.426 05:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:30:11.426 05:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:30:11.426 05:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3801842 ']' 00:30:11.426 05:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3801842 00:30:11.426 05:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3801842 ']' 00:30:11.426 05:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3801842 00:30:11.426 05:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:30:11.426 05:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:11.426 05:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3801842 00:30:11.685 05:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:11.685 05:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:11.685 05:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 3801842' 00:30:11.685 killing process with pid 3801842 00:30:11.685 05:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3801842 00:30:11.685 05:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3801842 00:30:11.685 05:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:11.685 05:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:11.685 05:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:11.685 05:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:30:11.685 05:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:30:11.685 05:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:11.685 05:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:30:11.943 05:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:11.943 05:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:11.943 05:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.943 05:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:11.943 05:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.913 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:13.913 00:30:13.913 real 0m13.098s 00:30:13.913 user 0m24.260s 00:30:13.913 sys 0m6.028s 00:30:13.913 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:13.913 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:13.913 ************************************ 00:30:13.913 END TEST nvmf_nmic 00:30:13.913 ************************************ 00:30:13.913 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:13.913 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:13.913 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:13.913 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:13.913 ************************************ 00:30:13.913 START TEST nvmf_fio_target 00:30:13.913 ************************************ 00:30:13.913 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:13.913 * Looking for test storage... 
00:30:13.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:13.913 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:13.913 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:30:13.913 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:14.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.172 --rc genhtml_branch_coverage=1 00:30:14.172 --rc genhtml_function_coverage=1 00:30:14.172 --rc genhtml_legend=1 00:30:14.172 --rc geninfo_all_blocks=1 00:30:14.172 --rc geninfo_unexecuted_blocks=1 00:30:14.172 00:30:14.172 ' 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:14.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.172 --rc genhtml_branch_coverage=1 00:30:14.172 --rc genhtml_function_coverage=1 00:30:14.172 --rc genhtml_legend=1 00:30:14.172 --rc geninfo_all_blocks=1 00:30:14.172 --rc geninfo_unexecuted_blocks=1 00:30:14.172 00:30:14.172 ' 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:14.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.172 --rc genhtml_branch_coverage=1 00:30:14.172 --rc genhtml_function_coverage=1 00:30:14.172 --rc genhtml_legend=1 00:30:14.172 --rc geninfo_all_blocks=1 00:30:14.172 --rc geninfo_unexecuted_blocks=1 00:30:14.172 00:30:14.172 ' 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:14.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.172 --rc genhtml_branch_coverage=1 00:30:14.172 --rc genhtml_function_coverage=1 00:30:14.172 --rc genhtml_legend=1 00:30:14.172 --rc geninfo_all_blocks=1 00:30:14.172 --rc geninfo_unexecuted_blocks=1 00:30:14.172 
00:30:14.172 ' 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:14.172 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:30:14.173 05:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:20.787 05:24:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:20.787 05:24:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:20.787 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:20.787 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:20.787 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:20.788 Found net 
devices under 0000:86:00.0: cvl_0_0 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:20.788 Found net devices under 0000:86:00.1: cvl_0_1 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:20.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:20.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:30:20.788 00:30:20.788 --- 10.0.0.2 ping statistics --- 00:30:20.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.788 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:20.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:20.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:30:20.788 00:30:20.788 --- 10.0.0.1 ping statistics --- 00:30:20.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.788 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3806218 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3806218 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3806218 ']' 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:20.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:20.788 [2024-12-09 05:24:56.596908] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:20.788 [2024-12-09 05:24:56.597939] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:30:20.788 [2024-12-09 05:24:56.597976] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:20.788 [2024-12-09 05:24:56.668070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:20.788 [2024-12-09 05:24:56.709462] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:20.788 [2024-12-09 05:24:56.709504] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:20.788 [2024-12-09 05:24:56.709511] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:20.788 [2024-12-09 05:24:56.709517] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:20.788 [2024-12-09 05:24:56.709522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:20.788 [2024-12-09 05:24:56.710952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:20.788 [2024-12-09 05:24:56.711056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:20.788 [2024-12-09 05:24:56.711082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:20.788 [2024-12-09 05:24:56.711084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.788 [2024-12-09 05:24:56.779669] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:20.788 [2024-12-09 05:24:56.779854] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:20.788 [2024-12-09 05:24:56.779944] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:20.788 [2024-12-09 05:24:56.780066] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:20.788 [2024-12-09 05:24:56.780252] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
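With the namespace ready, nvmfappstart launches the target inside it; the notices above confirm that DPDK initialized, one reactor started on each of the four cores in mask 0xF, and every nvmf poll group (plus app_thread) is running in interrupt mode instead of busy-polling. What the wrapper executes is roughly the sketch below; the trailing poll on rpc_get_methods is only one way to wait for the RPC socket and is an assumption here, since the real waitforlisten helper does its own retry loop against /var/tmp/spdk.sock:

  # -i 0: shared-memory id, -e 0xFFFF: enable all tracepoint groups,
  # --interrupt-mode: reactors sleep on events instead of polling, -m 0xF: cores 0-3
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # wait until the target answers RPCs before configuring it
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done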
00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:20.788 05:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:20.788 [2024-12-09 05:24:57.019812] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:20.789 05:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:20.789 05:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:30:20.789 05:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:21.047 05:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:30:21.047 05:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:21.306 05:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:30:21.306 05:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:21.306 05:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:30:21.306 05:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:30:21.566 05:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:21.823 05:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:30:21.823 05:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:22.082 05:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:30:22.082 05:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:22.339 05:24:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:30:22.339 05:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:30:22.339 05:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:22.596 05:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:22.596 05:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:22.854 05:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:22.854 05:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:23.112 05:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:23.112 [2024-12-09 05:24:59.671749] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:23.112 05:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:30:23.370 05:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:30:23.627 05:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:23.885 05:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:30:23.885 05:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:30:23.885 05:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:30:23.885 05:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:30:23.885 05:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:30:23.885 05:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:30:25.778 05:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:30:25.778 05:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:30:25.778 05:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:30:25.778 05:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:30:25.778 05:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:30:25.778 05:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:30:25.778 05:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:25.778 [global] 00:30:25.778 thread=1 00:30:25.778 invalidate=1 00:30:25.778 rw=write 00:30:25.778 time_based=1 00:30:25.778 runtime=1 00:30:25.778 ioengine=libaio 00:30:25.778 direct=1 00:30:25.778 bs=4096 00:30:25.778 iodepth=1 00:30:25.778 norandommap=0 00:30:25.778 numjobs=1 00:30:25.778 00:30:25.778 verify_dump=1 00:30:25.778 verify_backlog=512 00:30:25.778 verify_state_save=0 00:30:25.778 do_verify=1 00:30:25.778 verify=crc32c-intel 00:30:25.778 [job0] 00:30:25.778 filename=/dev/nvme0n1 00:30:25.778 [job1] 00:30:25.778 filename=/dev/nvme0n2 00:30:25.778 [job2] 00:30:25.778 filename=/dev/nvme0n3 00:30:25.778 [job3] 00:30:25.778 filename=/dev/nvme0n4 00:30:26.035 Could not set queue depth (nvme0n1) 00:30:26.035 Could not set queue depth (nvme0n2) 00:30:26.035 Could not set queue depth (nvme0n3) 00:30:26.035 Could not set queue depth (nvme0n4) 00:30:26.294 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:26.294 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:26.294 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:26.294 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:26.294 fio-3.35 00:30:26.294 Starting 4 threads 00:30:27.671 00:30:27.671 job0: (groupid=0, jobs=1): err= 0: pid=3807339: Mon Dec 9 05:25:03 2024 00:30:27.671 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:30:27.671 slat (nsec): min=6887, max=39734, avg=8151.80, stdev=1619.54 00:30:27.671 clat (usec): min=261, max=509, avg=377.50, stdev=69.13 00:30:27.671 lat (usec): min=270, max=517, avg=385.65, stdev=69.19 00:30:27.671 clat percentiles (usec): 00:30:27.671 | 1.00th=[ 269], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 289], 00:30:27.671 | 30.00th=[ 322], 40.00th=[ 363], 50.00th=[ 412], 60.00th=[ 420], 00:30:27.671 | 70.00th=[ 433], 80.00th=[ 441], 90.00th=[ 449], 95.00th=[ 461], 00:30:27.671 | 99.00th=[ 494], 99.50th=[ 502], 99.90th=[ 506], 99.95th=[ 510], 00:30:27.671 | 99.99th=[ 510] 00:30:27.671 write: IOPS=1981, BW=7924KiB/s (8114kB/s)(7932KiB/1001msec); 0 zone resets 00:30:27.671 slat (nsec): min=10086, max=62557, avg=12257.59, stdev=5528.97 00:30:27.671 clat (usec): min=141, max=476, avg=188.07, stdev=32.98 00:30:27.671 lat (usec): min=152, max=488, avg=200.33, stdev=34.71 00:30:27.671 clat percentiles (usec): 00:30:27.671 | 1.00th=[ 151], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 159], 00:30:27.671 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 178], 60.00th=[ 188], 00:30:27.671 | 70.00th=[ 198], 80.00th=[ 221], 90.00th=[ 239], 95.00th=[ 245], 00:30:27.671 | 99.00th=[ 277], 99.50th=[ 
306], 99.90th=[ 412], 99.95th=[ 478], 00:30:27.671 | 99.99th=[ 478] 00:30:27.671 bw ( KiB/s): min= 8192, max= 8192, per=41.53%, avg=8192.00, stdev= 0.00, samples=1 00:30:27.671 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:30:27.671 lat (usec) : 250=54.45%, 500=45.33%, 750=0.23% 00:30:27.671 cpu : usr=2.90%, sys=5.60%, ctx=3519, majf=0, minf=1 00:30:27.671 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:27.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.671 issued rwts: total=1536,1983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.671 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:27.671 job1: (groupid=0, jobs=1): err= 0: pid=3807340: Mon Dec 9 05:25:03 2024 00:30:27.671 read: IOPS=1690, BW=6761KiB/s (6924kB/s)(6768KiB/1001msec) 00:30:27.671 slat (nsec): min=6368, max=24436, avg=7457.77, stdev=1200.43 00:30:27.671 clat (usec): min=222, max=662, avg=334.89, stdev=69.77 00:30:27.671 lat (usec): min=229, max=670, avg=342.35, stdev=69.88 00:30:27.671 clat percentiles (usec): 00:30:27.671 | 1.00th=[ 249], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 277], 00:30:27.671 | 30.00th=[ 285], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 330], 00:30:27.671 | 70.00th=[ 347], 80.00th=[ 404], 90.00th=[ 449], 95.00th=[ 474], 00:30:27.671 | 99.00th=[ 510], 99.50th=[ 515], 99.90th=[ 644], 99.95th=[ 660], 00:30:27.671 | 99.99th=[ 660] 00:30:27.671 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:30:27.671 slat (nsec): min=9189, max=38256, avg=10403.49, stdev=1323.61 00:30:27.671 clat (usec): min=143, max=456, avg=191.39, stdev=33.02 00:30:27.671 lat (usec): min=153, max=467, avg=201.79, stdev=33.16 00:30:27.671 clat percentiles (usec): 00:30:27.671 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:30:27.671 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 186], 60.00th=[ 194], 00:30:27.671 | 70.00th=[ 206], 80.00th=[ 225], 90.00th=[ 241], 95.00th=[ 249], 00:30:27.671 | 99.00th=[ 273], 99.50th=[ 293], 99.90th=[ 338], 99.95th=[ 338], 00:30:27.671 | 99.99th=[ 457] 00:30:27.671 bw ( KiB/s): min= 8192, max= 8192, per=41.53%, avg=8192.00, stdev= 0.00, samples=1 00:30:27.671 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:30:27.671 lat (usec) : 250=52.73%, 500=46.50%, 750=0.78% 00:30:27.671 cpu : usr=1.60%, sys=3.70%, ctx=3740, majf=0, minf=1 00:30:27.671 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:27.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.671 issued rwts: total=1692,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.671 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:27.671 job2: (groupid=0, jobs=1): err= 0: pid=3807341: Mon Dec 9 05:25:03 2024 00:30:27.671 read: IOPS=353, BW=1414KiB/s (1448kB/s)(1432KiB/1013msec) 00:30:27.671 slat (nsec): min=7142, max=38071, avg=9815.16, stdev=3988.25 00:30:27.671 clat (usec): min=272, max=41497, avg=2481.17, stdev=9131.32 00:30:27.671 lat (usec): min=282, max=41505, avg=2490.98, stdev=9134.23 00:30:27.671 clat percentiles (usec): 00:30:27.671 | 1.00th=[ 277], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 289], 00:30:27.671 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[ 330], 00:30:27.671 | 70.00th=[ 347], 80.00th=[ 363], 90.00th=[ 412], 
95.00th=[40633], 00:30:27.671 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:30:27.671 | 99.99th=[41681] 00:30:27.671 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:30:27.671 slat (nsec): min=10502, max=64937, avg=17668.50, stdev=11115.88 00:30:27.671 clat (usec): min=146, max=388, avg=211.68, stdev=22.93 00:30:27.671 lat (usec): min=192, max=425, avg=229.35, stdev=22.33 00:30:27.671 clat percentiles (usec): 00:30:27.671 | 1.00th=[ 163], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 194], 00:30:27.671 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 215], 00:30:27.671 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 241], 95.00th=[ 253], 00:30:27.671 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 388], 99.95th=[ 388], 00:30:27.671 | 99.99th=[ 388] 00:30:27.671 bw ( KiB/s): min= 4096, max= 4096, per=20.76%, avg=4096.00, stdev= 0.00, samples=1 00:30:27.671 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:27.671 lat (usec) : 250=55.06%, 500=42.64%, 750=0.11% 00:30:27.671 lat (msec) : 50=2.18% 00:30:27.671 cpu : usr=0.40%, sys=1.98%, ctx=870, majf=0, minf=1 00:30:27.671 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:27.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.671 issued rwts: total=358,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.671 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:27.671 job3: (groupid=0, jobs=1): err= 0: pid=3807342: Mon Dec 9 05:25:03 2024 00:30:27.671 read: IOPS=494, BW=1979KiB/s (2026kB/s)(2028KiB/1025msec) 00:30:27.671 slat (nsec): min=7125, max=43807, avg=10744.82, stdev=4711.50 00:30:27.671 clat (usec): min=253, max=41139, avg=1755.82, stdev=7533.08 00:30:27.671 lat (usec): min=262, max=41149, avg=1766.56, stdev=7533.96 00:30:27.671 clat percentiles (usec): 00:30:27.671 | 1.00th=[ 273], 5.00th=[ 281], 10.00th=[ 281], 20.00th=[ 285], 00:30:27.671 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 293], 60.00th=[ 297], 00:30:27.671 | 70.00th=[ 306], 80.00th=[ 314], 90.00th=[ 367], 95.00th=[ 420], 00:30:27.671 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:27.671 | 99.99th=[41157] 00:30:27.672 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:30:27.672 slat (nsec): min=10354, max=36463, avg=11705.72, stdev=1968.45 00:30:27.672 clat (usec): min=166, max=458, avg=233.61, stdev=25.71 00:30:27.672 lat (usec): min=178, max=469, avg=245.31, stdev=26.15 00:30:27.672 clat percentiles (usec): 00:30:27.672 | 1.00th=[ 182], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 219], 00:30:27.672 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 237], 00:30:27.672 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 258], 95.00th=[ 273], 00:30:27.672 | 99.00th=[ 306], 99.50th=[ 338], 99.90th=[ 457], 99.95th=[ 457], 00:30:27.672 | 99.99th=[ 457] 00:30:27.672 bw ( KiB/s): min= 4096, max= 4096, per=20.76%, avg=4096.00, stdev= 0.00, samples=1 00:30:27.672 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:27.672 lat (usec) : 250=41.90%, 500=56.13%, 750=0.10% 00:30:27.672 lat (msec) : 10=0.10%, 50=1.77% 00:30:27.672 cpu : usr=0.98%, sys=1.46%, ctx=1019, majf=0, minf=1 00:30:27.672 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:27.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.672 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.672 issued rwts: total=507,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.672 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:27.672 00:30:27.672 Run status group 0 (all jobs): 00:30:27.672 READ: bw=15.6MiB/s (16.4MB/s), 1414KiB/s-6761KiB/s (1448kB/s-6924kB/s), io=16.0MiB (16.8MB), run=1001-1025msec 00:30:27.672 WRITE: bw=19.3MiB/s (20.2MB/s), 1998KiB/s-8184KiB/s (2046kB/s-8380kB/s), io=19.7MiB (20.7MB), run=1001-1025msec 00:30:27.672 00:30:27.672 Disk stats (read/write): 00:30:27.672 nvme0n1: ios=1383/1536, merge=0/0, ticks=523/284, in_queue=807, util=86.87% 00:30:27.672 nvme0n2: ios=1584/1559, merge=0/0, ticks=577/298, in_queue=875, util=91.66% 00:30:27.672 nvme0n3: ios=354/512, merge=0/0, ticks=725/102, in_queue=827, util=88.94% 00:30:27.672 nvme0n4: ios=502/512, merge=0/0, ticks=679/112, in_queue=791, util=89.68% 00:30:27.672 05:25:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:30:27.672 [global] 00:30:27.672 thread=1 00:30:27.672 invalidate=1 00:30:27.672 rw=randwrite 00:30:27.672 time_based=1 00:30:27.672 runtime=1 00:30:27.672 ioengine=libaio 00:30:27.672 direct=1 00:30:27.672 bs=4096 00:30:27.672 iodepth=1 00:30:27.672 norandommap=0 00:30:27.672 numjobs=1 00:30:27.672 00:30:27.672 verify_dump=1 00:30:27.672 verify_backlog=512 00:30:27.672 verify_state_save=0 00:30:27.672 do_verify=1 00:30:27.672 verify=crc32c-intel 00:30:27.672 [job0] 00:30:27.672 filename=/dev/nvme0n1 00:30:27.672 [job1] 00:30:27.672 filename=/dev/nvme0n2 00:30:27.672 [job2] 00:30:27.672 filename=/dev/nvme0n3 00:30:27.672 [job3] 00:30:27.672 filename=/dev/nvme0n4 00:30:27.672 Could not set queue depth (nvme0n1) 00:30:27.672 Could not set queue depth (nvme0n2) 00:30:27.672 Could not set queue depth (nvme0n3) 00:30:27.672 Could not set queue depth (nvme0n4) 00:30:27.672 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:27.672 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:27.672 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:27.672 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:27.672 fio-3.35 00:30:27.672 Starting 4 threads 00:30:29.048 00:30:29.048 job0: (groupid=0, jobs=1): err= 0: pid=3807716: Mon Dec 9 05:25:05 2024 00:30:29.048 read: IOPS=524, BW=2097KiB/s (2148kB/s)(2112KiB/1007msec) 00:30:29.048 slat (nsec): min=6528, max=23797, avg=7440.98, stdev=1597.04 00:30:29.048 clat (usec): min=233, max=42115, avg=1510.62, stdev=7032.25 00:30:29.048 lat (usec): min=240, max=42124, avg=1518.07, stdev=7033.18 00:30:29.048 clat percentiles (usec): 00:30:29.048 | 1.00th=[ 243], 5.00th=[ 251], 10.00th=[ 255], 20.00th=[ 260], 00:30:29.048 | 30.00th=[ 265], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:30:29.048 | 70.00th=[ 277], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 293], 00:30:29.048 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:29.048 | 99.99th=[42206] 00:30:29.048 write: IOPS=1016, BW=4068KiB/s (4165kB/s)(4096KiB/1007msec); 0 zone resets 00:30:29.048 slat (nsec): min=9204, max=62513, avg=10570.44, stdev=2345.91 00:30:29.048 clat (usec): min=147, max=409, avg=186.54, stdev=19.78 00:30:29.048 lat (usec): 
min=157, max=471, avg=197.11, stdev=20.60 00:30:29.048 clat percentiles (usec): 00:30:29.048 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:30:29.048 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 190], 00:30:29.048 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 208], 95.00th=[ 217], 00:30:29.048 | 99.00th=[ 239], 99.50th=[ 262], 99.90th=[ 355], 99.95th=[ 408], 00:30:29.048 | 99.99th=[ 408] 00:30:29.048 bw ( KiB/s): min= 8192, max= 8192, per=47.85%, avg=8192.00, stdev= 0.00, samples=1 00:30:29.048 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:30:29.048 lat (usec) : 250=66.82%, 500=32.15% 00:30:29.048 lat (msec) : 50=1.03% 00:30:29.048 cpu : usr=1.49%, sys=0.70%, ctx=1553, majf=0, minf=1 00:30:29.048 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:29.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.048 issued rwts: total=528,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.048 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:29.048 job1: (groupid=0, jobs=1): err= 0: pid=3807717: Mon Dec 9 05:25:05 2024 00:30:29.048 read: IOPS=21, BW=86.9KiB/s (89.0kB/s)(88.0KiB/1013msec) 00:30:29.048 slat (nsec): min=11877, max=33281, avg=23163.59, stdev=3440.32 00:30:29.048 clat (usec): min=40584, max=42117, avg=41133.24, stdev=458.36 00:30:29.048 lat (usec): min=40596, max=42138, avg=41156.40, stdev=458.84 00:30:29.048 clat percentiles (usec): 00:30:29.048 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:30:29.048 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:29.048 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:30:29.048 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:29.048 | 99.99th=[42206] 00:30:29.048 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:30:29.048 slat (nsec): min=9852, max=42833, avg=11138.81, stdev=2114.41 00:30:29.048 clat (usec): min=151, max=397, avg=195.35, stdev=19.59 00:30:29.048 lat (usec): min=161, max=424, avg=206.49, stdev=20.01 00:30:29.048 clat percentiles (usec): 00:30:29.048 | 1.00th=[ 159], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 182], 00:30:29.048 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 198], 00:30:29.048 | 70.00th=[ 202], 80.00th=[ 206], 90.00th=[ 212], 95.00th=[ 221], 00:30:29.048 | 99.00th=[ 241], 99.50th=[ 277], 99.90th=[ 396], 99.95th=[ 396], 00:30:29.048 | 99.99th=[ 396] 00:30:29.048 bw ( KiB/s): min= 4096, max= 4096, per=23.93%, avg=4096.00, stdev= 0.00, samples=1 00:30:29.048 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:29.048 lat (usec) : 250=95.13%, 500=0.75% 00:30:29.048 lat (msec) : 50=4.12% 00:30:29.048 cpu : usr=0.00%, sys=1.38%, ctx=534, majf=0, minf=1 00:30:29.048 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:29.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.048 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.048 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:29.048 job2: (groupid=0, jobs=1): err= 0: pid=3807718: Mon Dec 9 05:25:05 2024 00:30:29.048 read: IOPS=40, BW=160KiB/s (164kB/s)(164KiB/1022msec) 00:30:29.048 slat (nsec): min=8136, max=23926, avg=16306.80, 
stdev=7169.48 00:30:29.048 clat (usec): min=241, max=42037, avg=22305.74, stdev=20711.21 00:30:29.048 lat (usec): min=249, max=42060, avg=22322.04, stdev=20717.77 00:30:29.048 clat percentiles (usec): 00:30:29.048 | 1.00th=[ 241], 5.00th=[ 253], 10.00th=[ 289], 20.00th=[ 297], 00:30:29.048 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[41157], 60.00th=[41157], 00:30:29.048 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:30:29.048 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:29.048 | 99.99th=[42206] 00:30:29.048 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:30:29.048 slat (nsec): min=9194, max=38156, avg=10426.95, stdev=1616.66 00:30:29.048 clat (usec): min=173, max=457, avg=195.67, stdev=16.67 00:30:29.048 lat (usec): min=184, max=467, avg=206.10, stdev=17.13 00:30:29.048 clat percentiles (usec): 00:30:29.048 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 188], 00:30:29.048 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 196], 00:30:29.048 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 212], 00:30:29.048 | 99.00th=[ 233], 99.50th=[ 251], 99.90th=[ 457], 99.95th=[ 457], 00:30:29.048 | 99.99th=[ 457] 00:30:29.049 bw ( KiB/s): min= 4096, max= 4096, per=23.93%, avg=4096.00, stdev= 0.00, samples=1 00:30:29.049 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:29.049 lat (usec) : 250=92.22%, 500=3.80% 00:30:29.049 lat (msec) : 50=3.98% 00:30:29.049 cpu : usr=0.49%, sys=0.29%, ctx=553, majf=0, minf=2 00:30:29.049 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:29.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.049 issued rwts: total=41,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.049 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:29.049 job3: (groupid=0, jobs=1): err= 0: pid=3807719: Mon Dec 9 05:25:05 2024 00:30:29.049 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:30:29.049 slat (nsec): min=7253, max=37736, avg=8495.20, stdev=1295.32 00:30:29.049 clat (usec): min=213, max=543, avg=261.87, stdev=41.11 00:30:29.049 lat (usec): min=224, max=552, avg=270.37, stdev=41.22 00:30:29.049 clat percentiles (usec): 00:30:29.049 | 1.00th=[ 227], 5.00th=[ 237], 10.00th=[ 239], 20.00th=[ 243], 00:30:29.049 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 251], 00:30:29.049 | 70.00th=[ 260], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 318], 00:30:29.049 | 99.00th=[ 461], 99.50th=[ 469], 99.90th=[ 519], 99.95th=[ 523], 00:30:29.049 | 99.99th=[ 545] 00:30:29.049 write: IOPS=2323, BW=9295KiB/s (9518kB/s)(9304KiB/1001msec); 0 zone resets 00:30:29.049 slat (nsec): min=10804, max=65138, avg=12336.70, stdev=2133.90 00:30:29.049 clat (usec): min=139, max=326, avg=173.03, stdev=19.32 00:30:29.049 lat (usec): min=155, max=363, avg=185.37, stdev=19.74 00:30:29.049 clat percentiles (usec): 00:30:29.049 | 1.00th=[ 151], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 157], 00:30:29.049 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 172], 00:30:29.049 | 70.00th=[ 184], 80.00th=[ 194], 90.00th=[ 202], 95.00th=[ 208], 00:30:29.049 | 99.00th=[ 223], 99.50th=[ 229], 99.90th=[ 297], 99.95th=[ 310], 00:30:29.049 | 99.99th=[ 326] 00:30:29.049 bw ( KiB/s): min= 8192, max= 8192, per=47.85%, avg=8192.00, stdev= 0.00, samples=1 00:30:29.049 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, 
samples=1 00:30:29.049 lat (usec) : 250=78.94%, 500=20.94%, 750=0.11% 00:30:29.049 cpu : usr=5.20%, sys=5.50%, ctx=4375, majf=0, minf=1 00:30:29.049 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:29.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.049 issued rwts: total=2048,2326,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.049 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:29.049 00:30:29.049 Run status group 0 (all jobs): 00:30:29.049 READ: bw=10.1MiB/s (10.6MB/s), 86.9KiB/s-8184KiB/s (89.0kB/s-8380kB/s), io=10.3MiB (10.8MB), run=1001-1022msec 00:30:29.049 WRITE: bw=16.7MiB/s (17.5MB/s), 2004KiB/s-9295KiB/s (2052kB/s-9518kB/s), io=17.1MiB (17.9MB), run=1001-1022msec 00:30:29.049 00:30:29.049 Disk stats (read/write): 00:30:29.049 nvme0n1: ios=574/1024, merge=0/0, ticks=1054/184, in_queue=1238, util=94.99% 00:30:29.049 nvme0n2: ios=33/512, merge=0/0, ticks=757/97, in_queue=854, util=86.98% 00:30:29.049 nvme0n3: ios=36/512, merge=0/0, ticks=709/97, in_queue=806, util=89.04% 00:30:29.049 nvme0n4: ios=1685/2048, merge=0/0, ticks=1364/335, in_queue=1699, util=97.90% 00:30:29.049 05:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:30:29.049 [global] 00:30:29.049 thread=1 00:30:29.049 invalidate=1 00:30:29.049 rw=write 00:30:29.049 time_based=1 00:30:29.049 runtime=1 00:30:29.049 ioengine=libaio 00:30:29.049 direct=1 00:30:29.049 bs=4096 00:30:29.049 iodepth=128 00:30:29.049 norandommap=0 00:30:29.049 numjobs=1 00:30:29.049 00:30:29.049 verify_dump=1 00:30:29.049 verify_backlog=512 00:30:29.049 verify_state_save=0 00:30:29.049 do_verify=1 00:30:29.049 verify=crc32c-intel 00:30:29.049 [job0] 00:30:29.049 filename=/dev/nvme0n1 00:30:29.049 [job1] 00:30:29.049 filename=/dev/nvme0n2 00:30:29.049 [job2] 00:30:29.049 filename=/dev/nvme0n3 00:30:29.049 [job3] 00:30:29.049 filename=/dev/nvme0n4 00:30:29.049 Could not set queue depth (nvme0n1) 00:30:29.049 Could not set queue depth (nvme0n2) 00:30:29.049 Could not set queue depth (nvme0n3) 00:30:29.049 Could not set queue depth (nvme0n4) 00:30:29.307 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:29.307 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:29.307 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:29.307 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:29.307 fio-3.35 00:30:29.307 Starting 4 threads 00:30:30.714 00:30:30.714 job0: (groupid=0, jobs=1): err= 0: pid=3808084: Mon Dec 9 05:25:07 2024 00:30:30.714 read: IOPS=3939, BW=15.4MiB/s (16.1MB/s)(15.5MiB/1006msec) 00:30:30.714 slat (nsec): min=1762, max=13089k, avg=119294.47, stdev=908032.44 00:30:30.714 clat (usec): min=3207, max=52207, avg=14665.27, stdev=5456.53 00:30:30.714 lat (usec): min=7914, max=52215, avg=14784.57, stdev=5539.42 00:30:30.714 clat percentiles (usec): 00:30:30.714 | 1.00th=[ 8291], 5.00th=[10159], 10.00th=[10945], 20.00th=[11338], 00:30:30.714 | 30.00th=[12125], 40.00th=[12911], 50.00th=[13435], 60.00th=[13698], 00:30:30.714 | 70.00th=[14353], 80.00th=[15926], 90.00th=[20055], 95.00th=[23462], 
00:30:30.714 | 99.00th=[41157], 99.50th=[47449], 99.90th=[52167], 99.95th=[52167], 00:30:30.714 | 99.99th=[52167] 00:30:30.714 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:30:30.714 slat (usec): min=2, max=11681, avg=122.94, stdev=779.95 00:30:30.714 clat (usec): min=1434, max=52199, avg=16965.98, stdev=10915.41 00:30:30.714 lat (usec): min=1448, max=52210, avg=17088.92, stdev=10996.68 00:30:30.714 clat percentiles (usec): 00:30:30.714 | 1.00th=[ 6194], 5.00th=[ 7504], 10.00th=[ 8586], 20.00th=[ 9765], 00:30:30.714 | 30.00th=[10421], 40.00th=[10945], 50.00th=[12256], 60.00th=[13566], 00:30:30.714 | 70.00th=[15401], 80.00th=[29492], 90.00th=[36963], 95.00th=[40109], 00:30:30.714 | 99.00th=[42206], 99.50th=[42730], 99.90th=[45351], 99.95th=[52167], 00:30:30.714 | 99.99th=[52167] 00:30:30.714 bw ( KiB/s): min=16384, max=16384, per=22.23%, avg=16384.00, stdev= 0.00, samples=2 00:30:30.714 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:30:30.714 lat (msec) : 2=0.02%, 4=0.09%, 10=13.15%, 20=69.90%, 50=16.75% 00:30:30.714 lat (msec) : 100=0.09% 00:30:30.715 cpu : usr=4.08%, sys=4.38%, ctx=257, majf=0, minf=1 00:30:30.715 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:30:30.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:30.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:30.715 issued rwts: total=3963,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:30.715 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:30.715 job1: (groupid=0, jobs=1): err= 0: pid=3808085: Mon Dec 9 05:25:07 2024 00:30:30.715 read: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec) 00:30:30.715 slat (nsec): min=1294, max=8986.9k, avg=82147.67, stdev=658716.50 00:30:30.715 clat (usec): min=3279, max=19768, avg=10730.38, stdev=2706.63 00:30:30.715 lat (usec): min=3282, max=19778, avg=10812.53, stdev=2752.28 00:30:30.715 clat percentiles (usec): 00:30:30.715 | 1.00th=[ 5800], 5.00th=[ 7504], 10.00th=[ 8455], 20.00th=[ 8848], 00:30:30.715 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10159], 00:30:30.715 | 70.00th=[11338], 80.00th=[13173], 90.00th=[15270], 95.00th=[16188], 00:30:30.715 | 99.00th=[17957], 99.50th=[18482], 99.90th=[19006], 99.95th=[19268], 00:30:30.715 | 99.99th=[19792] 00:30:30.715 write: IOPS=6530, BW=25.5MiB/s (26.8MB/s)(25.7MiB/1006msec); 0 zone resets 00:30:30.715 slat (usec): min=2, max=8525, avg=70.11, stdev=500.40 00:30:30.715 clat (usec): min=1666, max=18555, avg=9375.69, stdev=2394.03 00:30:30.715 lat (usec): min=1678, max=18559, avg=9445.80, stdev=2406.85 00:30:30.715 clat percentiles (usec): 00:30:30.715 | 1.00th=[ 3884], 5.00th=[ 5604], 10.00th=[ 5997], 20.00th=[ 6456], 00:30:30.715 | 30.00th=[ 8717], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10159], 00:30:30.715 | 70.00th=[10290], 80.00th=[10421], 90.00th=[13042], 95.00th=[13435], 00:30:30.715 | 99.00th=[13829], 99.50th=[14353], 99.90th=[18220], 99.95th=[18482], 00:30:30.715 | 99.99th=[18482] 00:30:30.715 bw ( KiB/s): min=25720, max=25824, per=34.96%, avg=25772.00, stdev=73.54, samples=2 00:30:30.715 iops : min= 6430, max= 6456, avg=6443.00, stdev=18.38, samples=2 00:30:30.715 lat (msec) : 2=0.02%, 4=0.72%, 10=54.68%, 20=44.58% 00:30:30.715 cpu : usr=5.07%, sys=6.17%, ctx=499, majf=0, minf=1 00:30:30.715 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:30:30.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:30.715 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:30.715 issued rwts: total=6144,6570,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:30.715 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:30.715 job2: (groupid=0, jobs=1): err= 0: pid=3808086: Mon Dec 9 05:25:07 2024 00:30:30.715 read: IOPS=5493, BW=21.5MiB/s (22.5MB/s)(21.5MiB/1004msec) 00:30:30.715 slat (nsec): min=1496, max=10392k, avg=90868.22, stdev=649544.20 00:30:30.715 clat (usec): min=846, max=26578, avg=11651.67, stdev=2560.33 00:30:30.715 lat (usec): min=4810, max=26584, avg=11742.54, stdev=2598.66 00:30:30.715 clat percentiles (usec): 00:30:30.715 | 1.00th=[ 5932], 5.00th=[ 8356], 10.00th=[ 9372], 20.00th=[ 9896], 00:30:30.715 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11207], 60.00th=[11731], 00:30:30.715 | 70.00th=[12256], 80.00th=[13173], 90.00th=[14877], 95.00th=[16712], 00:30:30.715 | 99.00th=[19530], 99.50th=[20841], 99.90th=[26608], 99.95th=[26608], 00:30:30.715 | 99.99th=[26608] 00:30:30.715 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:30:30.715 slat (usec): min=2, max=9536, avg=82.90, stdev=580.98 00:30:30.715 clat (usec): min=1752, max=21386, avg=11149.74, stdev=2085.59 00:30:30.715 lat (usec): min=4491, max=21390, avg=11232.64, stdev=2120.17 00:30:30.715 clat percentiles (usec): 00:30:30.715 | 1.00th=[ 5342], 5.00th=[ 6980], 10.00th=[ 8455], 20.00th=[10159], 00:30:30.715 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:30:30.715 | 70.00th=[11731], 80.00th=[11994], 90.00th=[14091], 95.00th=[15008], 00:30:30.715 | 99.00th=[16057], 99.50th=[16909], 99.90th=[19530], 99.95th=[21365], 00:30:30.715 | 99.99th=[21365] 00:30:30.715 bw ( KiB/s): min=21336, max=23720, per=30.56%, avg=22528.00, stdev=1685.74, samples=2 00:30:30.715 iops : min= 5334, max= 5930, avg=5632.00, stdev=421.44, samples=2 00:30:30.715 lat (usec) : 1000=0.01% 00:30:30.715 lat (msec) : 2=0.01%, 10=19.85%, 20=79.74%, 50=0.39% 00:30:30.715 cpu : usr=3.79%, sys=7.48%, ctx=370, majf=0, minf=1 00:30:30.715 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:30:30.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:30.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:30.715 issued rwts: total=5515,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:30.715 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:30.715 job3: (groupid=0, jobs=1): err= 0: pid=3808087: Mon Dec 9 05:25:07 2024 00:30:30.715 read: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec) 00:30:30.715 slat (nsec): min=1700, max=23839k, avg=189297.54, stdev=1248168.44 00:30:30.715 clat (usec): min=11147, max=96435, avg=27288.34, stdev=16552.83 00:30:30.715 lat (usec): min=11151, max=99637, avg=27477.64, stdev=16647.76 00:30:30.715 clat percentiles (usec): 00:30:30.715 | 1.00th=[11863], 5.00th=[14353], 10.00th=[15533], 20.00th=[16057], 00:30:30.715 | 30.00th=[16581], 40.00th=[17171], 50.00th=[17957], 60.00th=[21103], 00:30:30.715 | 70.00th=[31065], 80.00th=[42730], 90.00th=[54789], 95.00th=[62653], 00:30:30.715 | 99.00th=[85459], 99.50th=[90702], 99.90th=[95945], 99.95th=[95945], 00:30:30.715 | 99.99th=[95945] 00:30:30.715 write: IOPS=2242, BW=8969KiB/s (9184kB/s)(9032KiB/1007msec); 0 zone resets 00:30:30.715 slat (usec): min=3, max=23361, avg=263.27, stdev=1603.69 00:30:30.715 clat (msec): min=2, max=127, avg=30.45, stdev=23.02 00:30:30.715 lat (msec): min=7, max=127, avg=30.71, stdev=23.21 00:30:30.715 clat 
percentiles (msec): 00:30:30.715 | 1.00th=[ 12], 5.00th=[ 15], 10.00th=[ 15], 20.00th=[ 16], 00:30:30.715 | 30.00th=[ 16], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 29], 00:30:30.715 | 70.00th=[ 39], 80.00th=[ 43], 90.00th=[ 53], 95.00th=[ 87], 00:30:30.715 | 99.00th=[ 113], 99.50th=[ 117], 99.90th=[ 124], 99.95th=[ 128], 00:30:30.715 | 99.99th=[ 128] 00:30:30.715 bw ( KiB/s): min= 8192, max= 8848, per=11.56%, avg=8520.00, stdev=463.86, samples=2 00:30:30.715 iops : min= 2048, max= 2212, avg=2130.00, stdev=115.97, samples=2 00:30:30.715 lat (msec) : 4=0.02%, 10=0.19%, 20=55.34%, 50=31.12%, 100=11.50% 00:30:30.715 lat (msec) : 250=1.83% 00:30:30.715 cpu : usr=1.99%, sys=4.08%, ctx=164, majf=0, minf=1 00:30:30.715 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:30:30.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:30.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:30.715 issued rwts: total=2048,2258,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:30.715 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:30.715 00:30:30.715 Run status group 0 (all jobs): 00:30:30.715 READ: bw=68.5MiB/s (71.9MB/s), 8135KiB/s-23.9MiB/s (8330kB/s-25.0MB/s), io=69.0MiB (72.4MB), run=1004-1007msec 00:30:30.715 WRITE: bw=72.0MiB/s (75.5MB/s), 8969KiB/s-25.5MiB/s (9184kB/s-26.8MB/s), io=72.5MiB (76.0MB), run=1004-1007msec 00:30:30.715 00:30:30.715 Disk stats (read/write): 00:30:30.715 nvme0n1: ios=3098/3157, merge=0/0, ticks=45370/58571, in_queue=103941, util=86.47% 00:30:30.715 nvme0n2: ios=5145/5577, merge=0/0, ticks=53378/50930, in_queue=104308, util=87.09% 00:30:30.715 nvme0n3: ios=4608/4827, merge=0/0, ticks=39771/38583, in_queue=78354, util=88.95% 00:30:30.715 nvme0n4: ios=2100/2048, merge=0/0, ticks=18227/17396, in_queue=35623, util=97.79% 00:30:30.715 05:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:30:30.715 [global] 00:30:30.715 thread=1 00:30:30.715 invalidate=1 00:30:30.715 rw=randwrite 00:30:30.715 time_based=1 00:30:30.715 runtime=1 00:30:30.715 ioengine=libaio 00:30:30.715 direct=1 00:30:30.715 bs=4096 00:30:30.715 iodepth=128 00:30:30.715 norandommap=0 00:30:30.715 numjobs=1 00:30:30.715 00:30:30.715 verify_dump=1 00:30:30.715 verify_backlog=512 00:30:30.715 verify_state_save=0 00:30:30.715 do_verify=1 00:30:30.715 verify=crc32c-intel 00:30:30.715 [job0] 00:30:30.715 filename=/dev/nvme0n1 00:30:30.715 [job1] 00:30:30.715 filename=/dev/nvme0n2 00:30:30.715 [job2] 00:30:30.715 filename=/dev/nvme0n3 00:30:30.715 [job3] 00:30:30.715 filename=/dev/nvme0n4 00:30:30.715 Could not set queue depth (nvme0n1) 00:30:30.715 Could not set queue depth (nvme0n2) 00:30:30.715 Could not set queue depth (nvme0n3) 00:30:30.715 Could not set queue depth (nvme0n4) 00:30:30.976 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:30.976 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:30.976 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:30.976 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:30.976 fio-3.35 00:30:30.976 Starting 4 threads 00:30:32.360 00:30:32.360 job0: (groupid=0, jobs=1): err= 0: 
pid=3808463: Mon Dec 9 05:25:08 2024 00:30:32.360 read: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec) 00:30:32.360 slat (nsec): min=1286, max=10695k, avg=96497.10, stdev=768078.34 00:30:32.360 clat (usec): min=3076, max=22622, avg=12316.09, stdev=2977.18 00:30:32.360 lat (usec): min=3081, max=25370, avg=12412.59, stdev=3041.30 00:30:32.360 clat percentiles (usec): 00:30:32.360 | 1.00th=[ 7767], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[10028], 00:30:32.360 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11600], 60.00th=[11994], 00:30:32.360 | 70.00th=[12387], 80.00th=[14353], 90.00th=[16712], 95.00th=[19268], 00:30:32.360 | 99.00th=[21103], 99.50th=[21103], 99.90th=[21890], 99.95th=[22152], 00:30:32.360 | 99.99th=[22676] 00:30:32.360 write: IOPS=5221, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1007msec); 0 zone resets 00:30:32.360 slat (usec): min=2, max=20848, avg=89.67, stdev=749.61 00:30:32.360 clat (usec): min=902, max=63684, avg=11884.52, stdev=6283.38 00:30:32.360 lat (usec): min=917, max=63748, avg=11974.20, stdev=6334.63 00:30:32.360 clat percentiles (usec): 00:30:32.360 | 1.00th=[ 4178], 5.00th=[ 6521], 10.00th=[ 7111], 20.00th=[ 8029], 00:30:32.360 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[11076], 60.00th=[11600], 00:30:32.360 | 70.00th=[11863], 80.00th=[12911], 90.00th=[16057], 95.00th=[20055], 00:30:32.360 | 99.00th=[46924], 99.50th=[53216], 99.90th=[54264], 99.95th=[54264], 00:30:32.360 | 99.99th=[63701] 00:30:32.360 bw ( KiB/s): min=20480, max=20568, per=25.21%, avg=20524.00, stdev=62.23, samples=2 00:30:32.360 iops : min= 5120, max= 5142, avg=5131.00, stdev=15.56, samples=2 00:30:32.360 lat (usec) : 1000=0.03% 00:30:32.360 lat (msec) : 2=0.06%, 4=0.40%, 10=25.36%, 20=70.18%, 50=3.57% 00:30:32.360 lat (msec) : 100=0.41% 00:30:32.360 cpu : usr=3.78%, sys=6.16%, ctx=372, majf=0, minf=1 00:30:32.360 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:30:32.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.360 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:32.360 issued rwts: total=5120,5258,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:32.360 job1: (groupid=0, jobs=1): err= 0: pid=3808474: Mon Dec 9 05:25:08 2024 00:30:32.360 read: IOPS=5149, BW=20.1MiB/s (21.1MB/s)(20.3MiB/1010msec) 00:30:32.360 slat (nsec): min=1486, max=11557k, avg=101621.80, stdev=817137.25 00:30:32.360 clat (usec): min=3091, max=28532, avg=12948.64, stdev=3428.45 00:30:32.360 lat (usec): min=4662, max=30337, avg=13050.26, stdev=3488.55 00:30:32.360 clat percentiles (usec): 00:30:32.360 | 1.00th=[ 5866], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[10552], 00:30:32.360 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11863], 60.00th=[12256], 00:30:32.360 | 70.00th=[13304], 80.00th=[16319], 90.00th=[18744], 95.00th=[19530], 00:30:32.360 | 99.00th=[21627], 99.50th=[21890], 99.90th=[22414], 99.95th=[27395], 00:30:32.360 | 99.99th=[28443] 00:30:32.360 write: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1010msec); 0 zone resets 00:30:32.360 slat (usec): min=2, max=10763, avg=79.49, stdev=561.77 00:30:32.360 clat (usec): min=1328, max=22285, avg=10772.53, stdev=2745.13 00:30:32.360 lat (usec): min=1334, max=23136, avg=10852.02, stdev=2771.14 00:30:32.360 clat percentiles (usec): 00:30:32.360 | 1.00th=[ 3326], 5.00th=[ 6456], 10.00th=[ 7439], 20.00th=[ 8717], 00:30:32.360 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10945], 60.00th=[11600], 00:30:32.360 | 70.00th=[11863], 
80.00th=[11994], 90.00th=[13435], 95.00th=[15926], 00:30:32.360 | 99.00th=[18482], 99.50th=[19530], 99.90th=[21627], 99.95th=[22152], 00:30:32.360 | 99.99th=[22414] 00:30:32.360 bw ( KiB/s): min=21264, max=23416, per=27.44%, avg=22340.00, stdev=1521.69, samples=2 00:30:32.360 iops : min= 5316, max= 5854, avg=5585.00, stdev=380.42, samples=2 00:30:32.360 lat (msec) : 2=0.16%, 4=0.57%, 10=22.83%, 20=74.53%, 50=1.91% 00:30:32.360 cpu : usr=3.47%, sys=6.14%, ctx=483, majf=0, minf=1 00:30:32.360 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:30:32.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.360 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:32.360 issued rwts: total=5201,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:32.360 job2: (groupid=0, jobs=1): err= 0: pid=3808497: Mon Dec 9 05:25:08 2024 00:30:32.360 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec) 00:30:32.360 slat (nsec): min=1486, max=12846k, avg=110513.92, stdev=908842.32 00:30:32.360 clat (usec): min=4316, max=25812, avg=13977.62, stdev=3606.60 00:30:32.360 lat (usec): min=4327, max=32345, avg=14088.14, stdev=3687.76 00:30:32.360 clat percentiles (usec): 00:30:32.360 | 1.00th=[ 7767], 5.00th=[ 9372], 10.00th=[10552], 20.00th=[11600], 00:30:32.360 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13042], 60.00th=[13304], 00:30:32.360 | 70.00th=[13960], 80.00th=[16712], 90.00th=[19530], 95.00th=[22152], 00:30:32.360 | 99.00th=[24249], 99.50th=[24773], 99.90th=[25560], 99.95th=[25822], 00:30:32.360 | 99.99th=[25822] 00:30:32.360 write: IOPS=4600, BW=18.0MiB/s (18.8MB/s)(18.1MiB/1009msec); 0 zone resets 00:30:32.360 slat (usec): min=2, max=56073, avg=99.85, stdev=1021.38 00:30:32.360 clat (usec): min=902, max=64877, avg=13633.19, stdev=8694.25 00:30:32.360 lat (usec): min=919, max=64885, avg=13733.05, stdev=8720.65 00:30:32.360 clat percentiles (usec): 00:30:32.360 | 1.00th=[ 6521], 5.00th=[ 7373], 10.00th=[ 8455], 20.00th=[ 9896], 00:30:32.361 | 30.00th=[11076], 40.00th=[12256], 50.00th=[13042], 60.00th=[13435], 00:30:32.361 | 70.00th=[13698], 80.00th=[13829], 90.00th=[14615], 95.00th=[18744], 00:30:32.361 | 99.00th=[64750], 99.50th=[64750], 99.90th=[64750], 99.95th=[64750], 00:30:32.361 | 99.99th=[64750] 00:30:32.361 bw ( KiB/s): min=16384, max=20480, per=22.64%, avg=18432.00, stdev=2896.31, samples=2 00:30:32.361 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:30:32.361 lat (usec) : 1000=0.03% 00:30:32.361 lat (msec) : 2=0.04%, 4=0.08%, 10=14.24%, 20=78.94%, 50=5.30% 00:30:32.361 lat (msec) : 100=1.37% 00:30:32.361 cpu : usr=2.78%, sys=6.05%, ctx=391, majf=0, minf=1 00:30:32.361 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:30:32.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:32.361 issued rwts: total=4608,4642,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.361 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:32.361 job3: (groupid=0, jobs=1): err= 0: pid=3808507: Mon Dec 9 05:25:08 2024 00:30:32.361 read: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec) 00:30:32.361 slat (nsec): min=1499, max=12311k, avg=111063.11, stdev=925164.83 00:30:32.361 clat (usec): min=7352, max=25215, avg=14129.66, stdev=3204.09 00:30:32.361 lat (usec): min=7360, max=33898, avg=14240.72, 
stdev=3320.90 00:30:32.361 clat percentiles (usec): 00:30:32.361 | 1.00th=[ 7439], 5.00th=[10552], 10.00th=[11338], 20.00th=[11994], 00:30:32.361 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13173], 60.00th=[13566], 00:30:32.361 | 70.00th=[13960], 80.00th=[15926], 90.00th=[19006], 95.00th=[21627], 00:30:32.361 | 99.00th=[24249], 99.50th=[24511], 99.90th=[25297], 99.95th=[25297], 00:30:32.361 | 99.99th=[25297] 00:30:32.361 write: IOPS=4990, BW=19.5MiB/s (20.4MB/s)(19.7MiB/1011msec); 0 zone resets 00:30:32.361 slat (usec): min=2, max=11781, avg=90.69, stdev=746.90 00:30:32.361 clat (usec): min=1586, max=24719, avg=12504.25, stdev=3337.82 00:30:32.361 lat (usec): min=1602, max=24734, avg=12594.94, stdev=3384.74 00:30:32.361 clat percentiles (usec): 00:30:32.361 | 1.00th=[ 4080], 5.00th=[ 7767], 10.00th=[ 8586], 20.00th=[10159], 00:30:32.361 | 30.00th=[11076], 40.00th=[11863], 50.00th=[12387], 60.00th=[12780], 00:30:32.361 | 70.00th=[13304], 80.00th=[14091], 90.00th=[17957], 95.00th=[19530], 00:30:32.361 | 99.00th=[20841], 99.50th=[22414], 99.90th=[24249], 99.95th=[24249], 00:30:32.361 | 99.99th=[24773] 00:30:32.361 bw ( KiB/s): min=18864, max=20480, per=24.16%, avg=19672.00, stdev=1142.68, samples=2 00:30:32.361 iops : min= 4716, max= 5120, avg=4918.00, stdev=285.67, samples=2 00:30:32.361 lat (msec) : 2=0.17%, 4=0.34%, 10=10.53%, 20=83.03%, 50=5.94% 00:30:32.361 cpu : usr=3.86%, sys=5.94%, ctx=272, majf=0, minf=2 00:30:32.361 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:30:32.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:32.361 issued rwts: total=4608,5045,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.361 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:32.361 00:30:32.361 Run status group 0 (all jobs): 00:30:32.361 READ: bw=75.5MiB/s (79.2MB/s), 17.8MiB/s-20.1MiB/s (18.7MB/s-21.1MB/s), io=76.3MiB (80.0MB), run=1007-1011msec 00:30:32.361 WRITE: bw=79.5MiB/s (83.4MB/s), 18.0MiB/s-21.8MiB/s (18.8MB/s-22.8MB/s), io=80.4MiB (84.3MB), run=1007-1011msec 00:30:32.361 00:30:32.361 Disk stats (read/write): 00:30:32.361 nvme0n1: ios=3979/4096, merge=0/0, ticks=47897/44666, in_queue=92563, util=99.40% 00:30:32.361 nvme0n2: ios=4130/4422, merge=0/0, ticks=52799/45854, in_queue=98653, util=99.59% 00:30:32.361 nvme0n3: ios=3606/4061, merge=0/0, ticks=49772/48163, in_queue=97935, util=99.24% 00:30:32.361 nvme0n4: ios=3599/3991, merge=0/0, ticks=50437/48624, in_queue=99061, util=97.01% 00:30:32.361 05:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:30:32.361 05:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3808689 00:30:32.361 05:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:30:32.361 05:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:30:32.361 [global] 00:30:32.361 thread=1 00:30:32.361 invalidate=1 00:30:32.361 rw=read 00:30:32.361 time_based=1 00:30:32.361 runtime=10 00:30:32.361 ioengine=libaio 00:30:32.361 direct=1 00:30:32.361 bs=4096 00:30:32.361 iodepth=1 00:30:32.361 norandommap=1 00:30:32.361 numjobs=1 00:30:32.361 00:30:32.361 [job0] 00:30:32.361 filename=/dev/nvme0n1 00:30:32.361 [job1] 00:30:32.361 filename=/dev/nvme0n2 00:30:32.361 
[job2] 00:30:32.361 filename=/dev/nvme0n3 00:30:32.361 [job3] 00:30:32.361 filename=/dev/nvme0n4 00:30:32.361 Could not set queue depth (nvme0n1) 00:30:32.361 Could not set queue depth (nvme0n2) 00:30:32.361 Could not set queue depth (nvme0n3) 00:30:32.361 Could not set queue depth (nvme0n4) 00:30:32.619 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:32.619 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:32.619 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:32.619 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:32.619 fio-3.35 00:30:32.619 Starting 4 threads 00:30:35.156 05:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:30:35.430 05:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:30:35.430 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=274432, buflen=4096 00:30:35.430 fio: pid=3808944, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:35.687 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=5349376, buflen=4096 00:30:35.687 fio: pid=3808939, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:35.687 05:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:35.687 05:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:30:35.687 05:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:35.687 05:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:30:35.945 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=6725632, buflen=4096 00:30:35.945 fio: pid=3808906, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:35.945 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=39878656, buflen=4096 00:30:35.945 fio: pid=3808922, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:35.945 05:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:35.945 05:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:30:35.945 00:30:35.945 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3808906: Mon Dec 9 05:25:12 2024 00:30:35.945 read: IOPS=516, BW=2067KiB/s (2116kB/s)(6568KiB/3178msec) 00:30:35.945 slat (usec): min=6, max=15872, avg=17.55, stdev=391.41 00:30:35.945 clat (usec): min=222, max=42224, avg=1903.31, stdev=7940.25 00:30:35.945 
lat (usec): min=229, max=56991, avg=1920.81, stdev=7999.98 00:30:35.945 clat percentiles (usec): 00:30:35.945 | 1.00th=[ 231], 5.00th=[ 247], 10.00th=[ 262], 20.00th=[ 277], 00:30:35.945 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 293], 00:30:35.945 | 70.00th=[ 297], 80.00th=[ 302], 90.00th=[ 330], 95.00th=[ 445], 00:30:35.945 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:30:35.945 | 99.99th=[42206] 00:30:35.945 bw ( KiB/s): min= 93, max=12216, per=14.39%, avg=2183.50, stdev=4917.49, samples=6 00:30:35.945 iops : min= 23, max= 3054, avg=545.83, stdev=1229.39, samples=6 00:30:35.945 lat (usec) : 250=7.18%, 500=88.44%, 750=0.37% 00:30:35.945 lat (msec) : 50=3.96% 00:30:35.945 cpu : usr=0.06%, sys=0.57%, ctx=1646, majf=0, minf=1 00:30:35.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:35.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.945 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.945 issued rwts: total=1643,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:35.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:35.945 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3808922: Mon Dec 9 05:25:12 2024 00:30:35.945 read: IOPS=2895, BW=11.3MiB/s (11.9MB/s)(38.0MiB/3363msec) 00:30:35.945 slat (usec): min=5, max=22012, avg=16.65, stdev=411.79 00:30:35.945 clat (usec): min=210, max=42278, avg=325.14, stdev=1602.39 00:30:35.945 lat (usec): min=217, max=61925, avg=341.80, stdev=1707.08 00:30:35.945 clat percentiles (usec): 00:30:35.945 | 1.00th=[ 245], 5.00th=[ 249], 10.00th=[ 251], 20.00th=[ 253], 00:30:35.945 | 30.00th=[ 255], 40.00th=[ 255], 50.00th=[ 258], 60.00th=[ 260], 00:30:35.945 | 70.00th=[ 262], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 310], 00:30:35.945 | 99.00th=[ 355], 99.50th=[ 408], 99.90th=[41157], 99.95th=[41157], 00:30:35.945 | 99.99th=[42206] 00:30:35.946 bw ( KiB/s): min= 5136, max=15168, per=82.64%, avg=12534.00, stdev=4187.53, samples=6 00:30:35.946 iops : min= 1284, max= 3792, avg=3133.50, stdev=1046.88, samples=6 00:30:35.946 lat (usec) : 250=7.20%, 500=92.58%, 750=0.05% 00:30:35.946 lat (msec) : 50=0.15% 00:30:35.946 cpu : usr=0.86%, sys=2.47%, ctx=9748, majf=0, minf=2 00:30:35.946 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:35.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.946 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.946 issued rwts: total=9737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:35.946 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:35.946 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3808939: Mon Dec 9 05:25:12 2024 00:30:35.946 read: IOPS=446, BW=1785KiB/s (1828kB/s)(5224KiB/2927msec) 00:30:35.946 slat (usec): min=3, max=15571, avg=31.66, stdev=598.75 00:30:35.946 clat (usec): min=235, max=42561, avg=2191.11, stdev=8600.74 00:30:35.946 lat (usec): min=239, max=42594, avg=2222.79, stdev=8619.46 00:30:35.946 clat percentiles (usec): 00:30:35.946 | 1.00th=[ 243], 5.00th=[ 249], 10.00th=[ 253], 20.00th=[ 260], 00:30:35.946 | 30.00th=[ 265], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 289], 00:30:35.946 | 70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 371], 95.00th=[ 537], 00:30:35.946 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42730], 00:30:35.946 | 
99.99th=[42730] 00:30:35.946 bw ( KiB/s): min= 96, max= 5776, per=8.36%, avg=1268.80, stdev=2520.72, samples=5 00:30:35.946 iops : min= 24, max= 1444, avg=317.20, stdev=630.18, samples=5 00:30:35.946 lat (usec) : 250=6.89%, 500=87.83%, 750=0.38% 00:30:35.946 lat (msec) : 2=0.15%, 50=4.67% 00:30:35.946 cpu : usr=0.10%, sys=0.68%, ctx=1309, majf=0, minf=2 00:30:35.946 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:35.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.946 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.946 issued rwts: total=1307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:35.946 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:35.946 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3808944: Mon Dec 9 05:25:12 2024 00:30:35.946 read: IOPS=24, BW=98.1KiB/s (100kB/s)(268KiB/2731msec) 00:30:35.946 slat (nsec): min=10684, max=26038, avg=24705.91, stdev=1811.47 00:30:35.946 clat (usec): min=397, max=42223, avg=40411.02, stdev=4968.03 00:30:35.946 lat (usec): min=423, max=42234, avg=40435.72, stdev=4967.82 00:30:35.946 clat percentiles (usec): 00:30:35.946 | 1.00th=[ 396], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:30:35.946 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:35.946 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:35.946 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:35.946 | 99.99th=[42206] 00:30:35.946 bw ( KiB/s): min= 96, max= 104, per=0.64%, avg=97.60, stdev= 3.58, samples=5 00:30:35.946 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:30:35.946 lat (usec) : 500=1.47% 00:30:35.946 lat (msec) : 50=97.06% 00:30:35.946 cpu : usr=0.00%, sys=0.11%, ctx=68, majf=0, minf=2 00:30:35.946 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:35.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.946 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.946 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:35.946 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:35.946 00:30:35.946 Run status group 0 (all jobs): 00:30:35.946 READ: bw=14.8MiB/s (15.5MB/s), 98.1KiB/s-11.3MiB/s (100kB/s-11.9MB/s), io=49.8MiB (52.2MB), run=2731-3363msec 00:30:35.946 00:30:35.946 Disk stats (read/write): 00:30:35.946 nvme0n1: ios=1640/0, merge=0/0, ticks=3027/0, in_queue=3027, util=95.25% 00:30:35.946 nvme0n2: ios=9769/0, merge=0/0, ticks=4022/0, in_queue=4022, util=97.25% 00:30:35.946 nvme0n3: ios=1136/0, merge=0/0, ticks=2813/0, in_queue=2813, util=95.64% 00:30:35.946 nvme0n4: ios=64/0, merge=0/0, ticks=2586/0, in_queue=2586, util=96.48% 00:30:36.204 05:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:36.204 05:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:30:36.461 05:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:36.461 05:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:30:36.719 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:36.719 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:30:36.978 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:36.978 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:30:36.978 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:30:36.978 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3808689 00:30:36.978 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:30:36.978 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:37.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:37.237 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:37.237 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:30:37.237 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:30:37.237 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:37.237 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:30:37.237 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:37.237 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:30:37.237 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:30:37.237 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:30:37.237 nvmf hotplug test: fio failed as expected 00:30:37.237 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:37.499 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:30:37.499 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:30:37.499 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:30:37.499 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:30:37.499 05:25:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:30:37.499 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:37.499 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:30:37.499 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:37.499 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:30:37.499 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:37.499 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:37.499 rmmod nvme_tcp 00:30:37.499 rmmod nvme_fabrics 00:30:37.499 rmmod nvme_keyring 00:30:37.499 05:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:37.499 05:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:30:37.499 05:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:30:37.499 05:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3806218 ']' 00:30:37.499 05:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3806218 00:30:37.499 05:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3806218 ']' 00:30:37.499 05:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3806218 00:30:37.499 05:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:30:37.499 05:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:37.499 05:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3806218 00:30:37.499 05:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:37.499 05:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:37.499 05:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3806218' 00:30:37.499 killing process with pid 3806218 00:30:37.499 05:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3806218 00:30:37.499 05:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3806218 00:30:37.758 05:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:37.759 05:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:37.759 05:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:37.759 05:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:30:37.759 05:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 
00:30:37.759 05:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:30:37.759 05:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:37.759 05:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:37.759 05:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:37.759 05:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.759 05:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:37.759 05:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.301 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:40.301 00:30:40.301 real 0m25.860s 00:30:40.301 user 1m30.505s 00:30:40.301 sys 0m11.022s 00:30:40.301 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:40.301 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:40.301 ************************************ 00:30:40.301 END TEST nvmf_fio_target 00:30:40.301 ************************************ 00:30:40.301 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:30:40.301 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:40.301 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:40.301 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:40.301 ************************************ 00:30:40.301 START TEST nvmf_bdevio 00:30:40.301 ************************************ 00:30:40.301 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:30:40.301 * Looking for test storage... 
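Stepping back to the nvmf_fio_target run that finished just above: the hotplug scenario it traced boils down to running fio read jobs against the exported namespaces while the backing bdevs are deleted over RPC, so the "Operation not supported" I/O errors are the expected outcome. A hedged, stand-alone sketch of that flow, assuming the initiator is already connected to nqn.2016-06.io.spdk:cnode1, the namespaces show up as /dev/nvme0n1..n4, and rpc.py is reachable on PATH (the full script path from the trace is abbreviated here; the fio command line is an approximation, not the exact fio.sh job file):

    RPC=rpc.py   # stands in for .../spdk/scripts/rpc.py from the trace

    # Four sequential-read jobs: 4k blocks, libaio, iodepth=1, one per namespace
    # (matching the job0..job3 descriptions in the log above).
    fio --rw=read --bs=4k --ioengine=libaio --iodepth=1 --time_based --runtime=10 \
        --name=job0 --filename=/dev/nvme0n1 \
        --name=job1 --filename=/dev/nvme0n2 \
        --name=job2 --filename=/dev/nvme0n3 \
        --name=job3 --filename=/dev/nvme0n4 &

    # Hot-remove the backing bdevs while fio is still running; the reads are
    # expected to fail, which the test reports as "fio failed as expected".
    $RPC bdev_raid_delete concat0
    $RPC bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        $RPC bdev_malloc_delete "$m"
    done
    wait

    # Teardown, as traced above: disconnect the initiator, then drop the subsystem.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1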
00:30:40.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:40.301 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:40.301 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:30:40.301 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:40.301 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:40.301 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:40.301 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:40.301 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:40.301 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:30:40.301 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:30:40.301 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:30:40.301 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:40.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.302 --rc genhtml_branch_coverage=1 00:30:40.302 --rc genhtml_function_coverage=1 00:30:40.302 --rc genhtml_legend=1 00:30:40.302 --rc geninfo_all_blocks=1 00:30:40.302 --rc geninfo_unexecuted_blocks=1 00:30:40.302 00:30:40.302 ' 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:40.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.302 --rc genhtml_branch_coverage=1 00:30:40.302 --rc genhtml_function_coverage=1 00:30:40.302 --rc genhtml_legend=1 00:30:40.302 --rc geninfo_all_blocks=1 00:30:40.302 --rc geninfo_unexecuted_blocks=1 00:30:40.302 00:30:40.302 ' 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:40.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.302 --rc genhtml_branch_coverage=1 00:30:40.302 --rc genhtml_function_coverage=1 00:30:40.302 --rc genhtml_legend=1 00:30:40.302 --rc geninfo_all_blocks=1 00:30:40.302 --rc geninfo_unexecuted_blocks=1 00:30:40.302 00:30:40.302 ' 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:40.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.302 --rc genhtml_branch_coverage=1 00:30:40.302 --rc genhtml_function_coverage=1 00:30:40.302 --rc genhtml_legend=1 00:30:40.302 --rc geninfo_all_blocks=1 00:30:40.302 --rc geninfo_unexecuted_blocks=1 00:30:40.302 00:30:40.302 ' 00:30:40.302 05:25:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:40.302 05:25:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:40.302 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:40.303 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:40.303 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:40.303 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.303 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:40.303 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.303 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:40.303 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:40.303 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:30:40.303 05:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:45.574 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:45.574 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:30:45.574 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:45.574 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:45.574 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:45.574 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:45.575 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:45.575 05:25:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:45.575 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:45.575 Found net devices under 0000:86:00.0: cvl_0_0 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:45.575 Found net devices under 0000:86:00.1: cvl_0_1 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:45.575 05:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:45.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:45.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:30:45.575 00:30:45.575 --- 10.0.0.2 ping statistics --- 00:30:45.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.575 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:45.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:45.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:30:45.575 00:30:45.575 --- 10.0.0.1 ping statistics --- 00:30:45.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.575 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:45.575 05:25:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3813282 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3813282 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3813282 ']' 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:45.575 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:45.575 [2024-12-09 05:25:22.208542] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:45.575 [2024-12-09 05:25:22.209540] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:30:45.575 [2024-12-09 05:25:22.209579] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.834 [2024-12-09 05:25:22.278698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:45.834 [2024-12-09 05:25:22.319723] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:45.834 [2024-12-09 05:25:22.319769] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:45.834 [2024-12-09 05:25:22.319777] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:45.834 [2024-12-09 05:25:22.319782] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:45.834 [2024-12-09 05:25:22.319787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:45.834 [2024-12-09 05:25:22.321427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:45.834 [2024-12-09 05:25:22.321537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:45.834 [2024-12-09 05:25:22.321644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:45.834 [2024-12-09 05:25:22.321645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:45.834 [2024-12-09 05:25:22.389674] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
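For reference, the target coming up here is launched inside the cvl_0_0_ns_spdk namespace created earlier, with -m 0x78 pinning its reactors to cores 3-6 and --interrupt-mode switching them from busy-polling to event-driven waiting (hence the reactor and intr-mode notices in this block). A rough sketch of the launch-and-wait step, assuming the namespace already exists and SPDK lives under $SPDK_DIR; the RPC polling loop only approximates the waitforlisten helper used by the test:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # as in the trace

    # Shared-memory id 0, tracepoint mask 0xFFFF, interrupt mode, cores 3-6.
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!

    # Approximation of waitforlisten: poll the default RPC socket until the
    # application answers, then continue with the provisioning RPCs.
    until "$SPDK_DIR/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is ready"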
00:30:45.834 [2024-12-09 05:25:22.390218] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:45.834 [2024-12-09 05:25:22.390620] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:45.834 [2024-12-09 05:25:22.390861] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:45.834 [2024-12-09 05:25:22.390919] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:45.834 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:45.834 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:30:45.834 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:45.834 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:45.834 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:45.834 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:45.834 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:45.834 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.834 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:45.834 [2024-12-09 05:25:22.466234] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:46.093 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.093 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:46.093 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.093 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:46.093 Malloc0 00:30:46.093 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.093 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:46.093 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.093 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:46.093 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.093 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:46.093 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.093 05:25:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:46.093 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.094 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:46.094 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.094 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:46.094 [2024-12-09 05:25:22.534361] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:46.094 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.094 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:30:46.094 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:30:46.094 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:30:46.094 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:30:46.094 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:46.094 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:46.094 { 00:30:46.094 "params": { 00:30:46.094 "name": "Nvme$subsystem", 00:30:46.094 "trtype": "$TEST_TRANSPORT", 00:30:46.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:46.094 "adrfam": "ipv4", 00:30:46.094 "trsvcid": "$NVMF_PORT", 00:30:46.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:46.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:46.094 "hdgst": ${hdgst:-false}, 00:30:46.094 "ddgst": ${ddgst:-false} 00:30:46.094 }, 00:30:46.094 "method": "bdev_nvme_attach_controller" 00:30:46.094 } 00:30:46.094 EOF 00:30:46.094 )") 00:30:46.094 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:30:46.094 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:30:46.094 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:30:46.094 05:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:46.094 "params": { 00:30:46.094 "name": "Nvme1", 00:30:46.094 "trtype": "tcp", 00:30:46.094 "traddr": "10.0.0.2", 00:30:46.094 "adrfam": "ipv4", 00:30:46.094 "trsvcid": "4420", 00:30:46.094 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:46.094 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:46.094 "hdgst": false, 00:30:46.094 "ddgst": false 00:30:46.094 }, 00:30:46.094 "method": "bdev_nvme_attach_controller" 00:30:46.094 }' 00:30:46.094 [2024-12-09 05:25:22.586603] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
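The provisioning that follows target start-up is all plain rpc.py calls; condensed from the rpc_cmd traces above (script path abbreviated, values exactly as logged), it is roughly:

    RPC=rpc.py   # i.e. $SPDK_DIR/scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio is then handed a generated JSON config whose single bdev entry is the bdev_nvme_attach_controller call printed above (trtype tcp, traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1), so the Nvme1n1 block-device tests that follow run against this subsystem over TCP.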
00:30:46.094 [2024-12-09 05:25:22.586647] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3813311 ] 00:30:46.094 [2024-12-09 05:25:22.653034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:46.094 [2024-12-09 05:25:22.697061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:46.094 [2024-12-09 05:25:22.697156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.094 [2024-12-09 05:25:22.697156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:46.352 I/O targets: 00:30:46.353 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:30:46.353 00:30:46.353 00:30:46.353 CUnit - A unit testing framework for C - Version 2.1-3 00:30:46.353 http://cunit.sourceforge.net/ 00:30:46.353 00:30:46.353 00:30:46.353 Suite: bdevio tests on: Nvme1n1 00:30:46.353 Test: blockdev write read block ...passed 00:30:46.353 Test: blockdev write zeroes read block ...passed 00:30:46.353 Test: blockdev write zeroes read no split ...passed 00:30:46.353 Test: blockdev write zeroes read split ...passed 00:30:46.353 Test: blockdev write zeroes read split partial ...passed 00:30:46.353 Test: blockdev reset ...[2024-12-09 05:25:22.994532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:30:46.353 [2024-12-09 05:25:22.994600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1844350 (9): Bad file descriptor 00:30:46.611 [2024-12-09 05:25:23.127961] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
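(For context: the --json /dev/fd/62 argument in the bdevio launch above is bash process substitution; the app reads the bdev_nvme_attach_controller configuration that gen_nvmf_target_json emits, and the resolved JSON is the block printed a few lines up. A minimal sketch of the same launch outside the harness, assuming the workspace layout shown in this log and that the NVMF_* environment used by gen_nvmf_target_json, a helper from test/nvmf/common.sh, is already set:
  source test/nvmf/common.sh                                  # assumed location of gen_nvmf_target_json
  ./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)    # the <(...) fd is what appears as /dev/fd/62 in the log
)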
00:30:46.611 passed 00:30:46.611 Test: blockdev write read 8 blocks ...passed 00:30:46.611 Test: blockdev write read size > 128k ...passed 00:30:46.611 Test: blockdev write read invalid size ...passed 00:30:46.611 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:46.611 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:46.611 Test: blockdev write read max offset ...passed 00:30:46.611 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:46.611 Test: blockdev writev readv 8 blocks ...passed 00:30:46.611 Test: blockdev writev readv 30 x 1block ...passed 00:30:46.871 Test: blockdev writev readv block ...passed 00:30:46.871 Test: blockdev writev readv size > 128k ...passed 00:30:46.871 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:46.871 Test: blockdev comparev and writev ...[2024-12-09 05:25:23.298020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:46.871 [2024-12-09 05:25:23.298050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:46.871 [2024-12-09 05:25:23.298064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:46.871 [2024-12-09 05:25:23.298072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:46.871 [2024-12-09 05:25:23.298389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:46.871 [2024-12-09 05:25:23.298400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:46.871 [2024-12-09 05:25:23.298412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:46.871 [2024-12-09 05:25:23.298419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:46.871 [2024-12-09 05:25:23.298730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:46.871 [2024-12-09 05:25:23.298742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:46.871 [2024-12-09 05:25:23.298754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:46.871 [2024-12-09 05:25:23.298762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:46.871 [2024-12-09 05:25:23.299078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:46.871 [2024-12-09 05:25:23.299091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:46.871 [2024-12-09 05:25:23.299103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:46.871 [2024-12-09 05:25:23.299111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:46.871 passed 00:30:46.871 Test: blockdev nvme passthru rw ...passed 00:30:46.871 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:25:23.381330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:46.871 [2024-12-09 05:25:23.381349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:46.871 [2024-12-09 05:25:23.381478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:46.871 [2024-12-09 05:25:23.381493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:46.872 [2024-12-09 05:25:23.381620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:46.872 [2024-12-09 05:25:23.381630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:46.872 [2024-12-09 05:25:23.381759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:46.872 [2024-12-09 05:25:23.381769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:46.872 passed 00:30:46.872 Test: blockdev nvme admin passthru ...passed 00:30:46.872 Test: blockdev copy ...passed 00:30:46.872 00:30:46.872 Run Summary: Type Total Ran Passed Failed Inactive 00:30:46.872 suites 1 1 n/a 0 0 00:30:46.872 tests 23 23 23 0 0 00:30:46.872 asserts 152 152 152 0 n/a 00:30:46.872 00:30:46.872 Elapsed time = 1.098 seconds 00:30:47.131 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:47.131 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.131 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:47.131 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.131 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:30:47.131 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:30:47.131 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:47.131 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:30:47.131 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:47.131 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:30:47.131 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:47.131 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:47.131 rmmod nvme_tcp 00:30:47.131 rmmod nvme_fabrics 00:30:47.131 rmmod nvme_keyring 00:30:47.131 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:30:47.131 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:30:47.131 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:30:47.131 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3813282 ']' 00:30:47.131 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3813282 00:30:47.131 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3813282 ']' 00:30:47.131 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3813282 00:30:47.131 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:30:47.132 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:47.132 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3813282 00:30:47.132 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:30:47.132 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:30:47.132 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3813282' 00:30:47.132 killing process with pid 3813282 00:30:47.132 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3813282 00:30:47.132 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3813282 00:30:47.391 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:47.391 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:47.391 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:47.391 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:30:47.391 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:47.391 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:30:47.391 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:30:47.391 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:47.391 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:47.391 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.391 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:47.391 05:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:49.938 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:49.938 00:30:49.938 real 0m9.628s 00:30:49.938 user 
0m8.666s 00:30:49.938 sys 0m4.965s 00:30:49.938 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:49.938 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:49.938 ************************************ 00:30:49.938 END TEST nvmf_bdevio 00:30:49.938 ************************************ 00:30:49.938 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:30:49.938 00:30:49.938 real 4m26.822s 00:30:49.938 user 9m6.453s 00:30:49.938 sys 1m46.572s 00:30:49.938 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:49.938 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:49.938 ************************************ 00:30:49.938 END TEST nvmf_target_core_interrupt_mode 00:30:49.938 ************************************ 00:30:49.938 05:25:26 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:30:49.938 05:25:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:49.938 05:25:26 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:49.938 05:25:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:49.938 ************************************ 00:30:49.938 START TEST nvmf_interrupt 00:30:49.938 ************************************ 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:30:49.938 * Looking for test storage... 
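(The END TEST banners above close the nvmf_target_core_interrupt_mode stage, and run_test immediately starts the nvmf_interrupt stage. Stripped of the run_test wrapper, which only adds the START/END banners, xtrace, and timing seen in this log, the stage being started is the plain script invocation named in the banner; a sketch, assuming the same workspace checkout:
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode   # same arguments run_test passes above
)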
00:30:49.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:30:49.938 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:49.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.939 --rc genhtml_branch_coverage=1 00:30:49.939 --rc genhtml_function_coverage=1 00:30:49.939 --rc genhtml_legend=1 00:30:49.939 --rc geninfo_all_blocks=1 00:30:49.939 --rc geninfo_unexecuted_blocks=1 00:30:49.939 00:30:49.939 ' 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:49.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.939 --rc genhtml_branch_coverage=1 00:30:49.939 --rc genhtml_function_coverage=1 00:30:49.939 --rc genhtml_legend=1 00:30:49.939 --rc geninfo_all_blocks=1 00:30:49.939 --rc geninfo_unexecuted_blocks=1 00:30:49.939 00:30:49.939 ' 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:49.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.939 --rc genhtml_branch_coverage=1 00:30:49.939 --rc genhtml_function_coverage=1 00:30:49.939 --rc genhtml_legend=1 00:30:49.939 --rc geninfo_all_blocks=1 00:30:49.939 --rc geninfo_unexecuted_blocks=1 00:30:49.939 00:30:49.939 ' 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:49.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.939 --rc genhtml_branch_coverage=1 00:30:49.939 --rc genhtml_function_coverage=1 00:30:49.939 --rc genhtml_legend=1 00:30:49.939 --rc geninfo_all_blocks=1 00:30:49.939 --rc geninfo_unexecuted_blocks=1 00:30:49.939 00:30:49.939 ' 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:30:49.939 05:25:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:55.205 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:55.205 05:25:31 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:55.205 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:55.205 Found net devices under 0000:86:00.0: cvl_0_0 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:55.205 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:55.206 Found net devices under 0000:86:00.1: cvl_0_1 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:55.206 05:25:31 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:55.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:55.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:30:55.206 00:30:55.206 --- 10.0.0.2 ping statistics --- 00:30:55.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.206 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:55.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:55.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:30:55.206 00:30:55.206 --- 10.0.0.1 ping statistics --- 00:30:55.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.206 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3816857 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3816857 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3816857 ']' 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:55.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:55.206 05:25:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:55.206 [2024-12-09 05:25:31.769559] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:55.206 [2024-12-09 05:25:31.770512] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:30:55.206 [2024-12-09 05:25:31.770549] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:55.206 [2024-12-09 05:25:31.838906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:55.464 [2024-12-09 05:25:31.882530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
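(The nvmfappstart -m 0x3 call above launches the target inside the cvl_0_0_ns_spdk namespace in interrupt mode. Condensed from the nvmf/common.sh@508 line in this log, and assuming the spdk workspace root shown earlier as the working directory, the underlying invocation is essentially:
  ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3   # two reactors (cores 0-1), all tracepoint groups enabled
)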
00:30:55.464 [2024-12-09 05:25:31.882566] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:55.464 [2024-12-09 05:25:31.882574] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:55.464 [2024-12-09 05:25:31.882580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:55.464 [2024-12-09 05:25:31.882586] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:55.464 [2024-12-09 05:25:31.883708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:55.464 [2024-12-09 05:25:31.883712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.464 [2024-12-09 05:25:31.952303] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:55.464 [2024-12-09 05:25:31.952465] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:55.464 [2024-12-09 05:25:31.952543] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:55.464 05:25:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:55.464 05:25:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:30:55.464 05:25:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:55.464 05:25:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:55.464 05:25:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:55.464 05:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:55.464 05:25:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:30:55.464 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:30:55.464 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:30:55.464 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:30:55.464 5000+0 records in 00:30:55.464 5000+0 records out 00:30:55.464 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0180679 s, 567 MB/s 00:30:55.464 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:30:55.464 05:25:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.464 05:25:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:55.464 AIO0 00:30:55.464 05:25:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.464 05:25:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:30:55.464 05:25:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.464 05:25:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:55.464 [2024-12-09 05:25:32.100255] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:55.464 05:25:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.464 05:25:32 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:55.464 05:25:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.464 05:25:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:55.723 [2024-12-09 05:25:32.136862] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3816857 0 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3816857 0 idle 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3816857 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3816857 -w 256 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3816857 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.23 reactor_0' 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3816857 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.23 reactor_0 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3816857 1 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3816857 1 idle 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3816857 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3816857 -w 256 00:30:55.723 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:55.981 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3816893 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1' 00:30:55.981 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3816893 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1 00:30:55.982 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:55.982 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:55.982 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:55.982 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:55.982 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:55.982 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:55.982 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:55.982 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:55.982 05:25:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:30:55.982 05:25:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3817112 00:30:55.982 05:25:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:55.982 05:25:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
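(The reactor_is_busy/idle checks surrounding the perf run sample per-thread CPU with top and compare the ninth column against the busy/idle thresholds. Reduced to a one-liner with the pid and reactor name taken from this log, the probe is roughly:
  top -bHn 1 -p 3816857 -w 256 | grep reactor_0 | awk '{print $9}'   # 0.0 here counts as idle; 93.3 below counts as busy
)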
00:30:55.982 05:25:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:30:55.982 05:25:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3816857 0 00:30:55.982 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3816857 0 busy 00:30:55.982 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3816857 00:30:55.982 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:55.982 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:30:55.982 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:30:55.982 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:55.982 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:30:55.982 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:55.982 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:55.982 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:55.982 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3816857 -w 256 00:30:55.982 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:56.240 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3816857 root 20 0 128.2g 47616 34560 R 93.3 0.0 0:00.37 reactor_0' 00:30:56.240 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3816857 root 20 0 128.2g 47616 34560 R 93.3 0.0 0:00.37 reactor_0 00:30:56.240 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:56.240 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:56.240 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:30:56.240 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:30:56.240 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:56.240 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:30:56.240 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:30:56.240 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:56.241 05:25:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:30:56.241 05:25:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:30:56.241 05:25:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3816857 1 00:30:56.241 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3816857 1 busy 00:30:56.241 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3816857 00:30:56.241 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:56.241 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:30:56.241 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:30:56.241 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:56.241 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:30:56.241 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:56.241 05:25:32 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:30:56.241 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:56.241 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3816857 -w 256 00:30:56.241 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:56.241 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3816893 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.24 reactor_1' 00:30:56.241 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:56.241 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3816893 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.24 reactor_1 00:30:56.241 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:56.241 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:30:56.241 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:30:56.241 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:56.241 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:30:56.241 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:30:56.241 05:25:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:56.241 05:25:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3817112 00:31:06.223 Initializing NVMe Controllers 00:31:06.223 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:06.223 Controller IO queue size 256, less than required. 00:31:06.223 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:06.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:06.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:06.223 Initialization complete. Launching workers. 
00:31:06.223 ======================================================== 00:31:06.223 Latency(us) 00:31:06.223 Device Information : IOPS MiB/s Average min max 00:31:06.223 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16038.90 62.65 15971.05 2807.61 56668.68 00:31:06.223 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 15932.50 62.24 16075.93 4331.67 20050.55 00:31:06.223 ======================================================== 00:31:06.223 Total : 31971.40 124.89 16023.31 2807.61 56668.68 00:31:06.223 00:31:06.223 05:25:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:31:06.223 05:25:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3816857 0 00:31:06.223 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3816857 0 idle 00:31:06.223 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3816857 00:31:06.223 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:06.223 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:06.223 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:06.223 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:06.223 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:06.223 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:06.223 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:06.223 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:06.223 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:06.223 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3816857 -w 256 00:31:06.223 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:06.483 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3816857 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.23 reactor_0' 00:31:06.483 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3816857 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.23 reactor_0 00:31:06.483 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:06.483 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:06.483 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:06.483 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:06.483 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:06.483 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:06.483 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:06.483 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:06.483 05:25:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:31:06.483 05:25:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3816857 1 00:31:06.483 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3816857 1 idle 00:31:06.483 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3816857 00:31:06.483 05:25:42 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:31:06.483 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:06.483 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:06.483 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:06.483 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:06.483 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:06.483 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:06.483 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:06.483 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:06.483 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3816857 -w 256 00:31:06.483 05:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:06.483 05:25:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3816893 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1' 00:31:06.483 05:25:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3816893 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1 00:31:06.483 05:25:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:06.483 05:25:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:06.483 05:25:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:06.483 05:25:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:06.483 05:25:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:06.483 05:25:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:06.483 05:25:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:06.483 05:25:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:06.483 05:25:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:07.052 05:25:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:31:07.052 05:25:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:31:07.052 05:25:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:07.052 05:25:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:07.052 05:25:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:31:08.956 05:25:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:08.956 05:25:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:08.956 05:25:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:08.956 05:25:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:08.956 05:25:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:08.956 05:25:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:31:08.956 05:25:45 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:31:08.956 05:25:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3816857 0 00:31:08.956 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3816857 0 idle 00:31:08.956 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3816857 00:31:08.956 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:08.956 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:08.956 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:08.956 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:08.956 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:08.956 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:08.956 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:08.956 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:08.956 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:08.956 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3816857 -w 256 00:31:08.956 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:09.215 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3816857 root 20 0 128.2g 73728 34560 S 6.7 0.0 0:20.43 reactor_0' 00:31:09.215 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3816857 root 20 0 128.2g 73728 34560 S 6.7 0.0 0:20.43 reactor_0 00:31:09.215 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:09.215 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:09.215 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:31:09.215 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:31:09.215 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:09.215 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:09.215 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:09.215 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:09.215 05:25:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:31:09.215 05:25:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3816857 1 00:31:09.215 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3816857 1 idle 00:31:09.215 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3816857 00:31:09.215 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:09.215 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:09.215 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:09.215 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:09.215 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:09.215 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:09.215 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
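
For reference, the waitforserial / waitforserial_disconnect steps used around the nvme connect and disconnect in this run poll lsblk until a block device advertising the SPDK serial appears (or disappears). This is a reconstruction from the trace with illustrative argument handling, not the exact autotest_common.sh source.

waitforserial() {                 # usage: waitforserial SPDKISFASTANDAWESOME [count]
    local serial=$1 expected=${2:-1} i=0
    while (( i++ <= 15 )); do
        sleep 2
        # count devices whose SERIAL matches; done once the expected count shows up
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == expected )) && return 0
    done
    return 1                      # device never appeared
}

waitforserial_disconnect() {      # usage: waitforserial_disconnect SPDKISFASTANDAWESOME
    local serial=$1 i=0
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( i++ > 15 )) && return 1    # still visible after the retry budget
        sleep 2
    done
    return 0
}
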
00:31:09.215 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:09.215 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:09.215 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3816857 -w 256 00:31:09.215 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:09.474 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3816893 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.07 reactor_1' 00:31:09.474 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3816893 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.07 reactor_1 00:31:09.474 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:09.474 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:09.474 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:09.474 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:09.474 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:09.474 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:09.474 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:09.474 05:25:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:09.474 05:25:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:09.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:09.474 05:25:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:09.474 05:25:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:31:09.474 05:25:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:09.474 05:25:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:09.474 05:25:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:09.474 05:25:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:09.733 05:25:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:31:09.733 05:25:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:31:09.733 05:25:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:31:09.733 05:25:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:09.733 05:25:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:31:09.733 05:25:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:09.733 05:25:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:31:09.733 05:25:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:09.733 05:25:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:09.733 rmmod nvme_tcp 00:31:09.733 rmmod nvme_fabrics 00:31:09.733 rmmod nvme_keyring 00:31:09.733 05:25:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:09.733 05:25:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:31:09.733 05:25:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:31:09.733 05:25:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
3816857 ']' 00:31:09.733 05:25:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3816857 00:31:09.733 05:25:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3816857 ']' 00:31:09.733 05:25:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3816857 00:31:09.733 05:25:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:31:09.733 05:25:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:09.733 05:25:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3816857 00:31:09.733 05:25:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:09.733 05:25:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:09.733 05:25:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3816857' 00:31:09.733 killing process with pid 3816857 00:31:09.733 05:25:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3816857 00:31:09.733 05:25:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3816857 00:31:09.992 05:25:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:09.992 05:25:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:09.992 05:25:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:09.992 05:25:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:31:09.992 05:25:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:31:09.992 05:25:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:09.992 05:25:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:31:09.992 05:25:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:09.992 05:25:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:09.992 05:25:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.992 05:25:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:09.992 05:25:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.895 05:25:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:11.895 00:31:11.895 real 0m22.408s 00:31:11.895 user 0m39.542s 00:31:11.895 sys 0m7.954s 00:31:11.895 05:25:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:11.895 05:25:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:11.895 ************************************ 00:31:11.895 END TEST nvmf_interrupt 00:31:11.895 ************************************ 00:31:12.153 00:31:12.153 real 26m52.397s 00:31:12.153 user 56m10.256s 00:31:12.153 sys 8m53.363s 00:31:12.153 05:25:48 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:12.153 05:25:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:12.153 ************************************ 00:31:12.153 END TEST nvmf_tcp 00:31:12.153 ************************************ 00:31:12.153 05:25:48 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:31:12.153 05:25:48 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:12.153 05:25:48 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:12.153 05:25:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:12.153 05:25:48 -- common/autotest_common.sh@10 -- # set +x 00:31:12.153 ************************************ 00:31:12.153 START TEST spdkcli_nvmf_tcp 00:31:12.153 ************************************ 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:12.153 * Looking for test storage... 00:31:12.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:12.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.153 --rc genhtml_branch_coverage=1 00:31:12.153 --rc genhtml_function_coverage=1 00:31:12.153 --rc genhtml_legend=1 00:31:12.153 --rc geninfo_all_blocks=1 00:31:12.153 --rc geninfo_unexecuted_blocks=1 00:31:12.153 00:31:12.153 ' 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:12.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.153 --rc genhtml_branch_coverage=1 00:31:12.153 --rc genhtml_function_coverage=1 00:31:12.153 --rc genhtml_legend=1 00:31:12.153 --rc geninfo_all_blocks=1 00:31:12.153 --rc geninfo_unexecuted_blocks=1 00:31:12.153 00:31:12.153 ' 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:12.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.153 --rc genhtml_branch_coverage=1 00:31:12.153 --rc genhtml_function_coverage=1 00:31:12.153 --rc genhtml_legend=1 00:31:12.153 --rc geninfo_all_blocks=1 00:31:12.153 --rc geninfo_unexecuted_blocks=1 00:31:12.153 00:31:12.153 ' 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:12.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.153 --rc genhtml_branch_coverage=1 00:31:12.153 --rc genhtml_function_coverage=1 00:31:12.153 --rc genhtml_legend=1 00:31:12.153 --rc geninfo_all_blocks=1 00:31:12.153 --rc geninfo_unexecuted_blocks=1 00:31:12.153 00:31:12.153 ' 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:31:12.153 
05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:12.153 05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:12.154 05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:12.154 05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:31:12.415 05:25:48 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:12.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3819799 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3819799 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3819799 ']' 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:12.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:12.415 05:25:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:12.415 [2024-12-09 05:25:48.867638] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
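
The log just above records nvmf_tgt being launched with reactor mask 0x3 and the test blocking in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. A rough sketch of that start-and-wait pattern follows; the helper name, the relative paths (assuming the SPDK repo root as working directory), and the rpc_get_methods probe are assumptions for illustration, not the verbatim waitforlisten implementation.

start_and_wait_nvmf_tgt() {
    local rpc_sock=/var/tmp/spdk.sock retries=100
    ./build/bin/nvmf_tgt -m 0x3 -p 0 &          # same core mask as this run
    nvmf_tgt_pid=$!
    while (( retries-- > 0 )); do
        # the target counts as "up" once its JSON-RPC server accepts requests
        if ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null; then
            return 0
        fi
        kill -0 "$nvmf_tgt_pid" 2>/dev/null || return 1   # target died early
        sleep 0.5
    done
    return 1                                    # never started listening
}
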
00:31:12.415 [2024-12-09 05:25:48.867690] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3819799 ] 00:31:12.415 [2024-12-09 05:25:48.932176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:12.415 [2024-12-09 05:25:48.979592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:12.415 [2024-12-09 05:25:48.979597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.740 05:25:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:12.740 05:25:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:31:12.740 05:25:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:12.740 05:25:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:12.740 05:25:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:12.740 05:25:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:12.740 05:25:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:31:12.740 05:25:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:12.740 05:25:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:12.740 05:25:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:12.740 05:25:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:12.740 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:12.740 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:12.740 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:12.740 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:12.740 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:12.740 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:12.740 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:12.740 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:12.740 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:12.740 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:12.740 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:12.740 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:12.740 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:12.740 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:12.740 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:12.740 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:31:12.740 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:12.740 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:12.740 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:12.740 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:12.740 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:12.740 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:12.740 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:12.740 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:12.740 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:12.740 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:12.740 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:12.740 ' 00:31:15.312 [2024-12-09 05:25:51.600007] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:16.260 [2024-12-09 05:25:52.820144] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:18.784 [2024-12-09 05:25:55.067067] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:20.678 [2024-12-09 05:25:56.997086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:22.051 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:22.051 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:22.051 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:22.051 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:22.051 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:22.051 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:22.051 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:22.051 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:22.051 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:22.051 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:22.052 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:22.052 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:22.052 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:22.052 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:22.052 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:22.052 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:22.052 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:22.052 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:22.052 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:22.052 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:22.052 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:22.052 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:22.052 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:22.052 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:22.052 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:22.052 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:22.052 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:22.052 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:22.052 05:25:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:22.052 05:25:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:22.052 05:25:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:22.052 05:25:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:22.052 05:25:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:22.052 05:25:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:22.052 05:25:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:31:22.052 05:25:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:22.617 05:25:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:22.617 05:25:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:22.617 05:25:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:22.617 05:25:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:22.617 05:25:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:22.617 
05:25:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:22.617 05:25:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:22.617 05:25:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:22.617 05:25:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:22.618 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:22.618 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:22.618 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:22.618 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:22.618 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:22.618 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:22.618 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:22.618 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:22.618 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:22.618 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:22.618 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:22.618 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:22.618 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:22.618 ' 00:31:27.881 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:27.881 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:27.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:27.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:27.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:27.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:27.882 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:27.882 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:27.882 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:27.882 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:27.882 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:27.882 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:27.882 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:27.882 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:27.882 05:26:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:27.882 05:26:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:27.882 05:26:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:27.882 
05:26:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3819799 00:31:27.882 05:26:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3819799 ']' 00:31:27.882 05:26:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3819799 00:31:27.882 05:26:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:31:27.882 05:26:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:27.882 05:26:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3819799 00:31:27.882 05:26:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:27.882 05:26:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:27.882 05:26:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3819799' 00:31:27.882 killing process with pid 3819799 00:31:27.882 05:26:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3819799 00:31:27.882 05:26:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3819799 00:31:28.140 05:26:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:28.140 05:26:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:28.140 05:26:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3819799 ']' 00:31:28.140 05:26:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3819799 00:31:28.140 05:26:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3819799 ']' 00:31:28.140 05:26:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3819799 00:31:28.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3819799) - No such process 00:31:28.140 05:26:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3819799 is not found' 00:31:28.140 Process with pid 3819799 is not found 00:31:28.140 05:26:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:28.140 05:26:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:28.140 05:26:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:28.140 00:31:28.140 real 0m15.950s 00:31:28.140 user 0m33.345s 00:31:28.140 sys 0m0.671s 00:31:28.140 05:26:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:28.140 05:26:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:28.140 ************************************ 00:31:28.140 END TEST spdkcli_nvmf_tcp 00:31:28.140 ************************************ 00:31:28.141 05:26:04 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:28.141 05:26:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:28.141 05:26:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:28.141 05:26:04 -- common/autotest_common.sh@10 -- # set +x 00:31:28.141 ************************************ 00:31:28.141 START TEST nvmf_identify_passthru 00:31:28.141 ************************************ 00:31:28.141 05:26:04 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:28.141 * Looking for test 
storage... 00:31:28.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:28.141 05:26:04 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:28.141 05:26:04 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:31:28.141 05:26:04 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:28.399 05:26:04 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:28.399 05:26:04 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:28.399 05:26:04 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:28.399 05:26:04 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:28.399 05:26:04 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:31:28.399 05:26:04 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:31:28.399 05:26:04 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:31:28.399 05:26:04 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:31:28.399 05:26:04 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:31:28.399 05:26:04 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:31:28.399 05:26:04 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:31:28.399 05:26:04 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:28.399 05:26:04 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:31:28.399 05:26:04 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:31:28.399 05:26:04 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:28.399 05:26:04 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:28.399 05:26:04 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:31:28.399 05:26:04 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:31:28.399 05:26:04 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:28.399 05:26:04 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:31:28.399 05:26:04 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:31:28.399 05:26:04 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:31:28.399 05:26:04 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:31:28.399 05:26:04 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:28.399 05:26:04 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:31:28.400 05:26:04 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:31:28.400 05:26:04 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:28.400 05:26:04 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:28.400 05:26:04 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:31:28.400 05:26:04 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:28.400 05:26:04 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:28.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.400 --rc genhtml_branch_coverage=1 00:31:28.400 --rc genhtml_function_coverage=1 00:31:28.400 --rc genhtml_legend=1 00:31:28.400 --rc geninfo_all_blocks=1 00:31:28.400 --rc geninfo_unexecuted_blocks=1 00:31:28.400 00:31:28.400 ' 00:31:28.400 05:26:04 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:28.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.400 --rc genhtml_branch_coverage=1 00:31:28.400 --rc genhtml_function_coverage=1 00:31:28.400 --rc genhtml_legend=1 00:31:28.400 --rc geninfo_all_blocks=1 00:31:28.400 --rc geninfo_unexecuted_blocks=1 00:31:28.400 00:31:28.400 ' 00:31:28.400 05:26:04 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:28.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.400 --rc genhtml_branch_coverage=1 00:31:28.400 --rc genhtml_function_coverage=1 00:31:28.400 --rc genhtml_legend=1 00:31:28.400 --rc geninfo_all_blocks=1 00:31:28.400 --rc geninfo_unexecuted_blocks=1 00:31:28.400 00:31:28.400 ' 00:31:28.400 05:26:04 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:28.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.400 --rc genhtml_branch_coverage=1 00:31:28.400 --rc genhtml_function_coverage=1 00:31:28.400 --rc genhtml_legend=1 00:31:28.400 --rc geninfo_all_blocks=1 00:31:28.400 --rc geninfo_unexecuted_blocks=1 00:31:28.400 00:31:28.400 ' 00:31:28.400 05:26:04 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:28.400 05:26:04 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:28.400 05:26:04 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:28.400 05:26:04 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:28.400 05:26:04 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:28.400 05:26:04 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.400 05:26:04 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.400 05:26:04 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.400 05:26:04 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:28.400 05:26:04 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:28.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:28.400 05:26:04 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:28.400 05:26:04 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:28.400 05:26:04 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:28.400 05:26:04 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:28.400 05:26:04 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:28.400 05:26:04 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.400 05:26:04 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.400 05:26:04 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.400 05:26:04 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:28.400 05:26:04 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.400 05:26:04 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:28.400 05:26:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:28.400 05:26:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:28.400 05:26:04 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:31:28.400 05:26:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:31:33.668 05:26:10 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:33.668 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:33.668 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:33.668 Found net devices under 0000:86:00.0: cvl_0_0 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:33.668 Found net devices under 0000:86:00.1: cvl_0_1 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:33.668 05:26:10 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:33.668 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:33.669 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:33.669 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:33.669 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:33.669 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:33.669 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:33.669 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:33.669 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:33.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:33.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:31:33.669 00:31:33.669 --- 10.0.0.2 ping statistics --- 00:31:33.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:33.669 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:31:33.669 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:33.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:33.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:31:33.669 00:31:33.669 --- 10.0.0.1 ping statistics --- 00:31:33.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:33.669 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:31:33.669 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:33.669 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:31:33.669 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:33.669 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:33.669 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:33.669 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:33.669 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:33.669 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:33.669 05:26:10 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:33.928 05:26:10 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:33.928 05:26:10 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:33.928 05:26:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:33.928 05:26:10 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:33.928 05:26:10 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:33.928 05:26:10 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:31:33.928 05:26:10 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:31:33.928 05:26:10 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:31:33.928 05:26:10 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:33.928 05:26:10 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:31:33.928 05:26:10 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:33.928 05:26:10 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:33.928 05:26:10 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:33.928 05:26:10 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:33.928 05:26:10 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:31:33.928 05:26:10 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:31:33.928 05:26:10 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:31:33.928 05:26:10 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:31:33.928 05:26:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:33.928 05:26:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:31:33.928 05:26:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:38.115 05:26:14 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:31:38.115 05:26:14 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:31:38.115 05:26:14 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:31:38.115 05:26:14 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:42.299 05:26:18 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:31:42.299 05:26:18 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:42.299 05:26:18 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:42.299 05:26:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:42.299 05:26:18 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:31:42.299 05:26:18 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:42.299 05:26:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:42.299 05:26:18 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3826825 00:31:42.299 05:26:18 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:42.299 05:26:18 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:42.299 05:26:18 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3826825 00:31:42.299 05:26:18 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3826825 ']' 00:31:42.299 05:26:18 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:42.299 05:26:18 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:42.299 05:26:18 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:42.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:42.299 05:26:18 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:42.299 05:26:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:42.299 [2024-12-09 05:26:18.875900] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:31:42.299 [2024-12-09 05:26:18.875948] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:42.556 [2024-12-09 05:26:18.944800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:42.556 [2024-12-09 05:26:18.989528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:42.556 [2024-12-09 05:26:18.989566] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
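The trace around this point is the core of the passthru-identify setup: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc so configuration can be applied before the framework initializes, the custom identify handler is enabled, and the local NVMe controller at 0000:5e:00.0 is re-exported over TCP. Condensed into one place as a rough sketch (commands copied from the surrounding trace with working-directory paths shortened; rpc_cmd is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock), the sequence the following trace walks through is approximately:

  # start the target inside the test namespace, paused until RPC configuration is done
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  rpc_cmd nvmf_set_config --passthru-identify-ctrlr   # admin identify is answered from the backing controller
  rpc_cmd framework_start_init
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # the test then identifies the fabric controller and expects serial/model to match the PCIe device
  ./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'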
00:31:42.556 [2024-12-09 05:26:18.989574] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:42.556 [2024-12-09 05:26:18.989581] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:42.556 [2024-12-09 05:26:18.989586] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:42.556 [2024-12-09 05:26:18.991004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:42.556 [2024-12-09 05:26:18.991021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:42.556 [2024-12-09 05:26:18.991090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:42.556 [2024-12-09 05:26:18.991091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.556 05:26:19 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:42.556 05:26:19 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:31:42.557 05:26:19 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:42.557 05:26:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.557 05:26:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:42.557 INFO: Log level set to 20 00:31:42.557 INFO: Requests: 00:31:42.557 { 00:31:42.557 "jsonrpc": "2.0", 00:31:42.557 "method": "nvmf_set_config", 00:31:42.557 "id": 1, 00:31:42.557 "params": { 00:31:42.557 "admin_cmd_passthru": { 00:31:42.557 "identify_ctrlr": true 00:31:42.557 } 00:31:42.557 } 00:31:42.557 } 00:31:42.557 00:31:42.557 INFO: response: 00:31:42.557 { 00:31:42.557 "jsonrpc": "2.0", 00:31:42.557 "id": 1, 00:31:42.557 "result": true 00:31:42.557 } 00:31:42.557 00:31:42.557 05:26:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.557 05:26:19 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:42.557 05:26:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.557 05:26:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:42.557 INFO: Setting log level to 20 00:31:42.557 INFO: Setting log level to 20 00:31:42.557 INFO: Log level set to 20 00:31:42.557 INFO: Log level set to 20 00:31:42.557 INFO: Requests: 00:31:42.557 { 00:31:42.557 "jsonrpc": "2.0", 00:31:42.557 "method": "framework_start_init", 00:31:42.557 "id": 1 00:31:42.557 } 00:31:42.557 00:31:42.557 INFO: Requests: 00:31:42.557 { 00:31:42.557 "jsonrpc": "2.0", 00:31:42.557 "method": "framework_start_init", 00:31:42.557 "id": 1 00:31:42.557 } 00:31:42.557 00:31:42.557 [2024-12-09 05:26:19.121495] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:42.557 INFO: response: 00:31:42.557 { 00:31:42.557 "jsonrpc": "2.0", 00:31:42.557 "id": 1, 00:31:42.557 "result": true 00:31:42.557 } 00:31:42.557 00:31:42.557 INFO: response: 00:31:42.557 { 00:31:42.557 "jsonrpc": "2.0", 00:31:42.557 "id": 1, 00:31:42.557 "result": true 00:31:42.557 } 00:31:42.557 00:31:42.557 05:26:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.557 05:26:19 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:42.557 05:26:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.557 05:26:19 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:31:42.557 INFO: Setting log level to 40 00:31:42.557 INFO: Setting log level to 40 00:31:42.557 INFO: Setting log level to 40 00:31:42.557 [2024-12-09 05:26:19.130829] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:42.557 05:26:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.557 05:26:19 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:42.557 05:26:19 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:42.557 05:26:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:42.557 05:26:19 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:31:42.557 05:26:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.557 05:26:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:45.829 Nvme0n1 00:31:45.829 05:26:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.829 05:26:22 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:45.829 05:26:22 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.829 05:26:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:45.829 05:26:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.829 05:26:22 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:45.829 05:26:22 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.829 05:26:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:45.829 05:26:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.829 05:26:22 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:45.830 05:26:22 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.830 05:26:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:45.830 [2024-12-09 05:26:22.049133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:45.830 05:26:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.830 05:26:22 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:45.830 05:26:22 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.830 05:26:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:45.830 [ 00:31:45.830 { 00:31:45.830 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:45.830 "subtype": "Discovery", 00:31:45.830 "listen_addresses": [], 00:31:45.830 "allow_any_host": true, 00:31:45.830 "hosts": [] 00:31:45.830 }, 00:31:45.830 { 00:31:45.830 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:45.830 "subtype": "NVMe", 00:31:45.830 "listen_addresses": [ 00:31:45.830 { 00:31:45.830 "trtype": "TCP", 00:31:45.830 "adrfam": "IPv4", 00:31:45.830 "traddr": "10.0.0.2", 00:31:45.830 "trsvcid": "4420" 00:31:45.830 } 00:31:45.830 ], 00:31:45.830 "allow_any_host": true, 00:31:45.830 "hosts": [], 00:31:45.830 "serial_number": 
"SPDK00000000000001", 00:31:45.830 "model_number": "SPDK bdev Controller", 00:31:45.830 "max_namespaces": 1, 00:31:45.830 "min_cntlid": 1, 00:31:45.830 "max_cntlid": 65519, 00:31:45.830 "namespaces": [ 00:31:45.830 { 00:31:45.830 "nsid": 1, 00:31:45.830 "bdev_name": "Nvme0n1", 00:31:45.830 "name": "Nvme0n1", 00:31:45.830 "nguid": "7625ED6CF70C4E7DBF57FC28EEA32047", 00:31:45.830 "uuid": "7625ed6c-f70c-4e7d-bf57-fc28eea32047" 00:31:45.830 } 00:31:45.830 ] 00:31:45.830 } 00:31:45.830 ] 00:31:45.830 05:26:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.830 05:26:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:45.830 05:26:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:45.830 05:26:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:45.830 05:26:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:31:45.830 05:26:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:45.830 05:26:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:45.830 05:26:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:46.088 05:26:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:31:46.088 05:26:22 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:31:46.088 05:26:22 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:31:46.088 05:26:22 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:46.088 05:26:22 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.088 05:26:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:46.088 05:26:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.088 05:26:22 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:46.088 05:26:22 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:46.088 05:26:22 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:46.088 05:26:22 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:31:46.088 05:26:22 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:46.088 05:26:22 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:31:46.088 05:26:22 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:46.088 05:26:22 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:46.088 rmmod nvme_tcp 00:31:46.088 rmmod nvme_fabrics 00:31:46.088 rmmod nvme_keyring 00:31:46.346 05:26:22 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:46.346 05:26:22 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:31:46.346 05:26:22 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:31:46.346 05:26:22 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 3826825 ']' 00:31:46.346 05:26:22 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3826825 00:31:46.346 05:26:22 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3826825 ']' 00:31:46.346 05:26:22 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3826825 00:31:46.346 05:26:22 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:31:46.346 05:26:22 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:46.346 05:26:22 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3826825 00:31:46.346 05:26:22 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:46.346 05:26:22 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:46.346 05:26:22 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3826825' 00:31:46.346 killing process with pid 3826825 00:31:46.346 05:26:22 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3826825 00:31:46.346 05:26:22 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3826825 00:31:47.720 05:26:24 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:47.720 05:26:24 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:47.720 05:26:24 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:47.720 05:26:24 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:31:47.720 05:26:24 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:31:47.720 05:26:24 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:31:47.721 05:26:24 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:47.721 05:26:24 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:47.721 05:26:24 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:47.721 05:26:24 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.721 05:26:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:47.721 05:26:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.253 05:26:26 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:50.253 00:31:50.253 real 0m21.761s 00:31:50.253 user 0m27.892s 00:31:50.253 sys 0m5.842s 00:31:50.253 05:26:26 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:50.253 05:26:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:50.253 ************************************ 00:31:50.253 END TEST nvmf_identify_passthru 00:31:50.253 ************************************ 00:31:50.253 05:26:26 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:50.253 05:26:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:50.253 05:26:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:50.253 05:26:26 -- common/autotest_common.sh@10 -- # set +x 00:31:50.253 ************************************ 00:31:50.253 START TEST nvmf_dif 00:31:50.253 ************************************ 00:31:50.253 05:26:26 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:50.253 * Looking for test 
storage... 00:31:50.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:50.253 05:26:26 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:50.253 05:26:26 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:31:50.253 05:26:26 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:50.253 05:26:26 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:31:50.253 05:26:26 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:50.253 05:26:26 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:50.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.253 --rc genhtml_branch_coverage=1 00:31:50.253 --rc genhtml_function_coverage=1 00:31:50.253 --rc genhtml_legend=1 00:31:50.253 --rc geninfo_all_blocks=1 00:31:50.253 --rc geninfo_unexecuted_blocks=1 00:31:50.253 00:31:50.253 ' 00:31:50.253 05:26:26 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:50.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.253 --rc genhtml_branch_coverage=1 00:31:50.253 --rc genhtml_function_coverage=1 00:31:50.253 --rc genhtml_legend=1 00:31:50.253 --rc geninfo_all_blocks=1 00:31:50.253 --rc geninfo_unexecuted_blocks=1 00:31:50.253 00:31:50.253 ' 00:31:50.253 05:26:26 nvmf_dif -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:50.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.253 --rc genhtml_branch_coverage=1 00:31:50.253 --rc genhtml_function_coverage=1 00:31:50.253 --rc genhtml_legend=1 00:31:50.253 --rc geninfo_all_blocks=1 00:31:50.253 --rc geninfo_unexecuted_blocks=1 00:31:50.253 00:31:50.253 ' 00:31:50.253 05:26:26 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:50.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.253 --rc genhtml_branch_coverage=1 00:31:50.253 --rc genhtml_function_coverage=1 00:31:50.253 --rc genhtml_legend=1 00:31:50.253 --rc geninfo_all_blocks=1 00:31:50.253 --rc geninfo_unexecuted_blocks=1 00:31:50.253 00:31:50.253 ' 00:31:50.253 05:26:26 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:50.253 05:26:26 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:50.253 05:26:26 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.253 05:26:26 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.253 05:26:26 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.253 05:26:26 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:31:50.253 05:26:26 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:50.253 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:50.253 05:26:26 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:31:50.253 05:26:26 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:50.253 05:26:26 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:50.253 05:26:26 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:31:50.253 05:26:26 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.253 05:26:26 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:50.253 05:26:26 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:50.253 05:26:26 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:31:50.253 05:26:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:55.517 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:55.517 
05:26:31 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:55.517 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:55.517 Found net devices under 0000:86:00.0: cvl_0_0 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:55.517 Found net devices under 0000:86:00.1: cvl_0_1 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:55.517 05:26:31 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:55.517 05:26:32 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:55.517 05:26:32 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:55.517 05:26:32 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:55.517 05:26:32 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:55.776 05:26:32 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:55.776 05:26:32 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:55.776 05:26:32 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:55.776 05:26:32 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:55.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:55.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:31:55.776 00:31:55.776 --- 10.0.0.2 ping statistics --- 00:31:55.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.776 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:31:55.776 05:26:32 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:55.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:55.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:31:55.777 00:31:55.777 --- 10.0.0.1 ping statistics --- 00:31:55.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.777 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:31:55.777 05:26:32 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:55.777 05:26:32 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:31:55.777 05:26:32 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:31:55.777 05:26:32 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:58.309 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:31:58.309 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:58.309 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:31:58.309 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:31:58.309 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:31:58.309 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:31:58.309 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:31:58.309 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:31:58.309 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:31:58.309 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:31:58.309 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:31:58.309 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:31:58.309 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:31:58.309 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:31:58.310 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:31:58.310 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:31:58.310 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:31:58.310 05:26:34 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:58.310 05:26:34 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:58.310 05:26:34 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:58.310 05:26:34 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:58.310 05:26:34 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:58.310 05:26:34 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:58.310 05:26:34 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:58.310 05:26:34 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:31:58.310 05:26:34 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:58.310 05:26:34 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:58.310 05:26:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:58.310 05:26:34 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3832294 00:31:58.310 05:26:34 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3832294 00:31:58.310 05:26:34 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:58.310 05:26:34 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3832294 ']' 00:31:58.310 05:26:34 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:58.310 05:26:34 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:58.310 05:26:34 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:31:58.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:58.310 05:26:34 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:58.310 05:26:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:58.310 [2024-12-09 05:26:34.667263] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:31:58.310 [2024-12-09 05:26:34.667307] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:58.310 [2024-12-09 05:26:34.735670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:58.310 [2024-12-09 05:26:34.776758] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:58.310 [2024-12-09 05:26:34.776791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:58.310 [2024-12-09 05:26:34.776798] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:58.310 [2024-12-09 05:26:34.776804] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:58.310 [2024-12-09 05:26:34.776809] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:58.310 [2024-12-09 05:26:34.777382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:58.310 05:26:34 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:58.310 05:26:34 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:31:58.310 05:26:34 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:58.310 05:26:34 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:58.310 05:26:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:58.310 05:26:34 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:58.310 05:26:34 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:31:58.310 05:26:34 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:58.310 05:26:34 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.310 05:26:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:58.310 [2024-12-09 05:26:34.908826] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:58.310 05:26:34 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.310 05:26:34 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:58.310 05:26:34 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:58.310 05:26:34 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:58.310 05:26:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:58.310 ************************************ 00:31:58.310 START TEST fio_dif_1_default 00:31:58.310 ************************************ 00:31:58.310 05:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:31:58.310 05:26:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:31:58.310 05:26:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:31:58.310 05:26:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:31:58.310 05:26:34 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:31:58.310 05:26:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:31:58.310 05:26:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:58.310 05:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.310 05:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:58.570 bdev_null0 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:58.570 [2024-12-09 05:26:34.981149] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:58.570 { 00:31:58.570 "params": { 00:31:58.570 "name": "Nvme$subsystem", 00:31:58.570 "trtype": "$TEST_TRANSPORT", 00:31:58.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:58.570 "adrfam": "ipv4", 00:31:58.570 "trsvcid": "$NVMF_PORT", 00:31:58.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:58.570 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:31:58.570 "hdgst": ${hdgst:-false}, 00:31:58.570 "ddgst": ${ddgst:-false} 00:31:58.570 }, 00:31:58.570 "method": "bdev_nvme_attach_controller" 00:31:58.570 } 00:31:58.570 EOF 00:31:58.570 )") 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:31:58.570 05:26:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:58.570 "params": { 00:31:58.570 "name": "Nvme0", 00:31:58.570 "trtype": "tcp", 00:31:58.570 "traddr": "10.0.0.2", 00:31:58.570 "adrfam": "ipv4", 00:31:58.570 "trsvcid": "4420", 00:31:58.570 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:58.570 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:58.570 "hdgst": false, 00:31:58.570 "ddgst": false 00:31:58.570 }, 00:31:58.570 "method": "bdev_nvme_attach_controller" 00:31:58.570 }' 00:31:58.570 05:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:58.570 05:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:58.570 05:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:58.570 05:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:58.570 05:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:58.570 05:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:58.570 05:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:58.570 05:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:58.570 05:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:58.570 05:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:58.828 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:58.828 fio-3.35 00:31:58.828 Starting 1 thread 00:32:11.023 00:32:11.023 filename0: (groupid=0, jobs=1): err= 0: pid=3832662: Mon Dec 9 05:26:45 2024 00:32:11.023 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10003msec) 00:32:11.023 slat (nsec): min=6024, max=26171, avg=6316.06, stdev=767.40 00:32:11.023 clat (usec): min=487, max=44730, avg=21085.59, stdev=20472.79 00:32:11.023 lat (usec): min=493, max=44757, avg=21091.90, stdev=20472.75 00:32:11.023 clat percentiles (usec): 00:32:11.023 | 1.00th=[ 494], 5.00th=[ 502], 10.00th=[ 506], 20.00th=[ 515], 00:32:11.023 | 30.00th=[ 523], 40.00th=[ 553], 50.00th=[41157], 60.00th=[41157], 00:32:11.023 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:32:11.023 | 99.00th=[42206], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:32:11.023 | 99.99th=[44827] 00:32:11.023 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=759.58, stdev=25.78, samples=19 00:32:11.023 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:32:11.023 lat (usec) : 500=4.48%, 750=45.31% 00:32:11.023 lat (msec) : 50=50.21% 00:32:11.023 cpu : usr=91.70%, sys=8.05%, ctx=14, majf=0, minf=0 00:32:11.023 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:11.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.023 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:11.023 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:11.023 
00:32:11.023 Run status group 0 (all jobs): 00:32:11.023 READ: bw=758KiB/s (776kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=7584KiB (7766kB), run=10003-10003msec 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.023 00:32:11.023 real 0m11.244s 00:32:11.023 user 0m15.971s 00:32:11.023 sys 0m1.094s 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:11.023 ************************************ 00:32:11.023 END TEST fio_dif_1_default 00:32:11.023 ************************************ 00:32:11.023 05:26:46 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:32:11.023 05:26:46 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:11.023 05:26:46 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:11.023 05:26:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:11.023 ************************************ 00:32:11.023 START TEST fio_dif_1_multi_subsystems 00:32:11.023 ************************************ 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:11.023 bdev_null0 00:32:11.023 05:26:46 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:11.023 [2024-12-09 05:26:46.298033] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:11.023 bdev_null1 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:11.023 { 00:32:11.023 "params": { 00:32:11.023 "name": "Nvme$subsystem", 00:32:11.023 "trtype": "$TEST_TRANSPORT", 00:32:11.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:11.023 "adrfam": "ipv4", 00:32:11.023 "trsvcid": "$NVMF_PORT", 00:32:11.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:11.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:11.023 "hdgst": ${hdgst:-false}, 00:32:11.023 "ddgst": ${ddgst:-false} 00:32:11.023 }, 00:32:11.023 "method": "bdev_nvme_attach_controller" 00:32:11.023 } 00:32:11.023 EOF 00:32:11.023 )") 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( 
file <= files )) 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:11.023 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:11.023 { 00:32:11.023 "params": { 00:32:11.023 "name": "Nvme$subsystem", 00:32:11.023 "trtype": "$TEST_TRANSPORT", 00:32:11.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:11.024 "adrfam": "ipv4", 00:32:11.024 "trsvcid": "$NVMF_PORT", 00:32:11.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:11.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:11.024 "hdgst": ${hdgst:-false}, 00:32:11.024 "ddgst": ${ddgst:-false} 00:32:11.024 }, 00:32:11.024 "method": "bdev_nvme_attach_controller" 00:32:11.024 } 00:32:11.024 EOF 00:32:11.024 )") 00:32:11.024 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:32:11.024 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:11.024 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:32:11.024 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:32:11.024 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:32:11.024 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:11.024 "params": { 00:32:11.024 "name": "Nvme0", 00:32:11.024 "trtype": "tcp", 00:32:11.024 "traddr": "10.0.0.2", 00:32:11.024 "adrfam": "ipv4", 00:32:11.024 "trsvcid": "4420", 00:32:11.024 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:11.024 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:11.024 "hdgst": false, 00:32:11.024 "ddgst": false 00:32:11.024 }, 00:32:11.024 "method": "bdev_nvme_attach_controller" 00:32:11.024 },{ 00:32:11.024 "params": { 00:32:11.024 "name": "Nvme1", 00:32:11.024 "trtype": "tcp", 00:32:11.024 "traddr": "10.0.0.2", 00:32:11.024 "adrfam": "ipv4", 00:32:11.024 "trsvcid": "4420", 00:32:11.024 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:11.024 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:11.024 "hdgst": false, 00:32:11.024 "ddgst": false 00:32:11.024 }, 00:32:11.024 "method": "bdev_nvme_attach_controller" 00:32:11.024 }' 00:32:11.024 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:11.024 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:11.024 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:11.024 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:11.024 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:11.024 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:11.024 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 
-- # asan_lib= 00:32:11.024 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:11.024 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:11.024 05:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:11.024 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:11.024 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:11.024 fio-3.35 00:32:11.024 Starting 2 threads 00:32:20.985 00:32:20.985 filename0: (groupid=0, jobs=1): err= 0: pid=3834628: Mon Dec 9 05:26:57 2024 00:32:20.985 read: IOPS=189, BW=759KiB/s (777kB/s)(7616KiB/10039msec) 00:32:20.985 slat (nsec): min=6103, max=94402, avg=7471.41, stdev=2925.70 00:32:20.985 clat (usec): min=453, max=43303, avg=21067.81, stdev=20416.87 00:32:20.985 lat (usec): min=459, max=43324, avg=21075.28, stdev=20416.28 00:32:20.985 clat percentiles (usec): 00:32:20.985 | 1.00th=[ 490], 5.00th=[ 498], 10.00th=[ 506], 20.00th=[ 545], 00:32:20.985 | 30.00th=[ 562], 40.00th=[ 603], 50.00th=[40633], 60.00th=[41157], 00:32:20.985 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:32:20.985 | 99.00th=[41681], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:32:20.985 | 99.99th=[43254] 00:32:20.985 bw ( KiB/s): min= 672, max= 768, per=50.10%, avg=760.00, stdev=25.16, samples=20 00:32:20.985 iops : min= 168, max= 192, avg=190.00, stdev= 6.29, samples=20 00:32:20.985 lat (usec) : 500=6.46%, 750=41.23%, 1000=0.32% 00:32:20.985 lat (msec) : 2=1.79%, 50=50.21% 00:32:20.985 cpu : usr=96.74%, sys=3.01%, ctx=18, majf=0, minf=108 00:32:20.985 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:20.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:20.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:20.985 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:20.985 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:20.985 filename1: (groupid=0, jobs=1): err= 0: pid=3834629: Mon Dec 9 05:26:57 2024 00:32:20.985 read: IOPS=189, BW=758KiB/s (777kB/s)(7616KiB/10041msec) 00:32:20.985 slat (nsec): min=6085, max=25187, avg=7443.62, stdev=2181.54 00:32:20.985 clat (usec): min=503, max=42475, avg=21071.96, stdev=20384.25 00:32:20.985 lat (usec): min=509, max=42482, avg=21079.41, stdev=20383.65 00:32:20.985 clat percentiles (usec): 00:32:20.985 | 1.00th=[ 523], 5.00th=[ 537], 10.00th=[ 537], 20.00th=[ 553], 00:32:20.985 | 30.00th=[ 570], 40.00th=[ 644], 50.00th=[40633], 60.00th=[41157], 00:32:20.985 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:32:20.985 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:32:20.985 | 99.99th=[42730] 00:32:20.985 bw ( KiB/s): min= 672, max= 768, per=50.10%, avg=760.00, stdev=25.16, samples=20 00:32:20.985 iops : min= 168, max= 192, avg=190.00, stdev= 6.29, samples=20 00:32:20.985 lat (usec) : 750=46.32%, 1000=1.58% 00:32:20.985 lat (msec) : 2=1.89%, 50=50.21% 00:32:20.985 cpu : usr=97.00%, sys=2.75%, ctx=10, majf=0, minf=151 00:32:20.985 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:20.985 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:20.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:20.985 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:20.985 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:20.985 00:32:20.985 Run status group 0 (all jobs): 00:32:20.985 READ: bw=1517KiB/s (1553kB/s), 758KiB/s-759KiB/s (777kB/s-777kB/s), io=14.9MiB (15.6MB), run=10039-10041msec 00:32:21.244 05:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:32:21.244 05:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:32:21.244 05:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:21.244 05:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:21.244 05:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:32:21.244 05:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:21.244 05:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.244 05:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:21.244 05:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.244 05:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:21.244 05:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.244 05:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:21.244 05:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.244 05:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:21.244 05:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:21.244 05:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:32:21.244 05:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:21.245 05:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.245 05:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:21.245 05:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.245 05:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:21.245 05:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.245 05:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:21.245 05:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.245 00:32:21.245 real 0m11.461s 00:32:21.245 user 0m26.023s 00:32:21.245 sys 0m0.879s 00:32:21.245 05:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:21.245 05:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:21.245 ************************************ 00:32:21.245 END TEST fio_dif_1_multi_subsystems 00:32:21.245 ************************************ 00:32:21.245 05:26:57 
nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:32:21.245 05:26:57 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:21.245 05:26:57 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:21.245 05:26:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:21.245 ************************************ 00:32:21.245 START TEST fio_dif_rand_params 00:32:21.245 ************************************ 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:21.245 bdev_null0 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:21.245 [2024-12-09 05:26:57.835626] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:21.245 { 00:32:21.245 "params": { 00:32:21.245 "name": "Nvme$subsystem", 00:32:21.245 "trtype": "$TEST_TRANSPORT", 00:32:21.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:21.245 "adrfam": "ipv4", 00:32:21.245 "trsvcid": "$NVMF_PORT", 00:32:21.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:21.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:21.245 "hdgst": ${hdgst:-false}, 00:32:21.245 "ddgst": ${ddgst:-false} 00:32:21.245 }, 00:32:21.245 "method": "bdev_nvme_attach_controller" 00:32:21.245 } 00:32:21.245 EOF 00:32:21.245 )") 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params 
-- nvmf/common.sh@584 -- # jq . 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:21.245 "params": { 00:32:21.245 "name": "Nvme0", 00:32:21.245 "trtype": "tcp", 00:32:21.245 "traddr": "10.0.0.2", 00:32:21.245 "adrfam": "ipv4", 00:32:21.245 "trsvcid": "4420", 00:32:21.245 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:21.245 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:21.245 "hdgst": false, 00:32:21.245 "ddgst": false 00:32:21.245 }, 00:32:21.245 "method": "bdev_nvme_attach_controller" 00:32:21.245 }' 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:21.245 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:21.529 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:21.530 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:21.530 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:21.530 05:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:21.788 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:21.788 ... 
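The job description that fio reads from /dev/fd/61 is generated by gen_fio_conf and is not echoed in this log; a plausible stand-alone equivalent of this invocation, assuming the resolved JSON above is saved to bdev.json and that the attached controller exposes its namespace under SPDK's usual Nvme0n1 bdev name, is:

  PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
  cat > dif_rand.fio <<'EOF'
  [global]
  ioengine=spdk_bdev       ; served by the preloaded SPDK fio plugin
  thread=1
  direct=1
  rw=randread
  bs=128k                  ; NULL_DIF=3 pass: bs=128k, numjobs=3, iodepth=3, runtime=5
  iodepth=3
  numjobs=3
  time_based=1
  runtime=5

  [filename0]
  filename=Nvme0n1         ; assumed bdev name for controller Nvme0, namespace 1
  EOF
  LD_PRELOAD=$PLUGIN /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif_rand.fio

The host-side job issues ordinary reads; the --dif-insert-or-strip transport option shown earlier is what keeps DIF handling on the target side of the TCP connection.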
00:32:21.788 fio-3.35 00:32:21.788 Starting 3 threads 00:32:28.366 00:32:28.366 filename0: (groupid=0, jobs=1): err= 0: pid=3836427: Mon Dec 9 05:27:03 2024 00:32:28.366 read: IOPS=264, BW=33.0MiB/s (34.6MB/s)(165MiB/5005msec) 00:32:28.366 slat (nsec): min=6337, max=26592, avg=10489.40, stdev=2449.37 00:32:28.366 clat (usec): min=3666, max=91936, avg=11332.67, stdev=10537.14 00:32:28.366 lat (usec): min=3673, max=91949, avg=11343.16, stdev=10537.17 00:32:28.366 clat percentiles (usec): 00:32:28.366 | 1.00th=[ 4113], 5.00th=[ 4686], 10.00th=[ 5800], 20.00th=[ 6521], 00:32:28.366 | 30.00th=[ 6980], 40.00th=[ 7898], 50.00th=[ 9110], 60.00th=[ 9896], 00:32:28.366 | 70.00th=[10683], 80.00th=[11600], 90.00th=[12780], 95.00th=[46924], 00:32:28.366 | 99.00th=[51119], 99.50th=[52167], 99.90th=[91751], 99.95th=[91751], 00:32:28.366 | 99.99th=[91751] 00:32:28.366 bw ( KiB/s): min=22016, max=42496, per=31.11%, avg=33817.60, stdev=7543.63, samples=10 00:32:28.366 iops : min= 172, max= 332, avg=264.20, stdev=58.93, samples=10 00:32:28.366 lat (msec) : 4=0.53%, 10=60.02%, 20=33.11%, 50=5.14%, 100=1.21% 00:32:28.366 cpu : usr=94.60%, sys=5.08%, ctx=15, majf=0, minf=45 00:32:28.366 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:28.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.366 issued rwts: total=1323,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.366 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:28.366 filename0: (groupid=0, jobs=1): err= 0: pid=3836428: Mon Dec 9 05:27:03 2024 00:32:28.366 read: IOPS=325, BW=40.6MiB/s (42.6MB/s)(205MiB/5043msec) 00:32:28.366 slat (nsec): min=6350, max=27726, avg=10591.39, stdev=2603.50 00:32:28.366 clat (usec): min=3681, max=87200, avg=9192.38, stdev=7092.87 00:32:28.366 lat (usec): min=3688, max=87213, avg=9202.97, stdev=7093.03 00:32:28.366 clat percentiles (usec): 00:32:28.366 | 1.00th=[ 3949], 5.00th=[ 4228], 10.00th=[ 5211], 20.00th=[ 6390], 00:32:28.366 | 30.00th=[ 6783], 40.00th=[ 7373], 50.00th=[ 8094], 60.00th=[ 8717], 00:32:28.366 | 70.00th=[ 9372], 80.00th=[10159], 90.00th=[11207], 95.00th=[12125], 00:32:28.366 | 99.00th=[48497], 99.50th=[49546], 99.90th=[52167], 99.95th=[87557], 00:32:28.366 | 99.99th=[87557] 00:32:28.366 bw ( KiB/s): min=33536, max=49664, per=38.56%, avg=41907.20, stdev=5187.89, samples=10 00:32:28.366 iops : min= 262, max= 388, avg=327.40, stdev=40.53, samples=10 00:32:28.366 lat (msec) : 4=1.65%, 10=77.06%, 20=18.49%, 50=2.32%, 100=0.49% 00:32:28.366 cpu : usr=93.87%, sys=5.81%, ctx=12, majf=0, minf=21 00:32:28.366 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:28.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.366 issued rwts: total=1639,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.366 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:28.366 filename0: (groupid=0, jobs=1): err= 0: pid=3836430: Mon Dec 9 05:27:03 2024 00:32:28.366 read: IOPS=261, BW=32.7MiB/s (34.3MB/s)(165MiB/5043msec) 00:32:28.366 slat (nsec): min=6332, max=32061, avg=10541.46, stdev=2793.17 00:32:28.366 clat (usec): min=3534, max=52005, avg=11416.05, stdev=11206.03 00:32:28.366 lat (usec): min=3541, max=52018, avg=11426.60, stdev=11206.03 00:32:28.366 clat percentiles (usec): 00:32:28.366 | 1.00th=[ 4080], 5.00th=[ 4555], 
10.00th=[ 5669], 20.00th=[ 6587], 00:32:28.366 | 30.00th=[ 7242], 40.00th=[ 7832], 50.00th=[ 8291], 60.00th=[ 8848], 00:32:28.366 | 70.00th=[ 9503], 80.00th=[10290], 90.00th=[11994], 95.00th=[47449], 00:32:28.366 | 99.00th=[49546], 99.50th=[50070], 99.90th=[52167], 99.95th=[52167], 00:32:28.366 | 99.99th=[52167] 00:32:28.366 bw ( KiB/s): min=26368, max=40704, per=31.04%, avg=33740.80, stdev=4479.47, samples=10 00:32:28.366 iops : min= 206, max= 318, avg=263.60, stdev=35.00, samples=10 00:32:28.366 lat (msec) : 4=0.45%, 10=76.59%, 20=14.62%, 50=7.50%, 100=0.83% 00:32:28.366 cpu : usr=93.83%, sys=5.85%, ctx=16, majf=0, minf=84 00:32:28.366 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:28.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.366 issued rwts: total=1320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.366 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:28.366 00:32:28.366 Run status group 0 (all jobs): 00:32:28.366 READ: bw=106MiB/s (111MB/s), 32.7MiB/s-40.6MiB/s (34.3MB/s-42.6MB/s), io=535MiB (561MB), run=5005-5043msec 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:28.366 bdev_null0 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:28.366 [2024-12-09 05:27:03.975630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:28.366 05:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:28.367 05:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:28.367 05:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.367 05:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:28.367 bdev_null1 00:32:28.367 05:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.367 05:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:28.367 05:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.367 05:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:28.367 05:27:04 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:28.367 bdev_null2 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:28.367 { 00:32:28.367 "params": { 00:32:28.367 "name": "Nvme$subsystem", 00:32:28.367 "trtype": "$TEST_TRANSPORT", 00:32:28.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:28.367 "adrfam": "ipv4", 00:32:28.367 "trsvcid": "$NVMF_PORT", 00:32:28.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:28.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:28.367 "hdgst": ${hdgst:-false}, 00:32:28.367 "ddgst": ${ddgst:-false} 00:32:28.367 }, 00:32:28.367 "method": "bdev_nvme_attach_controller" 00:32:28.367 } 00:32:28.367 EOF 00:32:28.367 )") 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:28.367 { 00:32:28.367 "params": { 00:32:28.367 "name": "Nvme$subsystem", 00:32:28.367 "trtype": "$TEST_TRANSPORT", 00:32:28.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:28.367 "adrfam": "ipv4", 00:32:28.367 "trsvcid": "$NVMF_PORT", 00:32:28.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:28.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:28.367 "hdgst": ${hdgst:-false}, 00:32:28.367 "ddgst": ${ddgst:-false} 00:32:28.367 }, 00:32:28.367 "method": "bdev_nvme_attach_controller" 00:32:28.367 } 00:32:28.367 EOF 00:32:28.367 )") 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:28.367 05:27:04 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:28.367 { 00:32:28.367 "params": { 00:32:28.367 "name": "Nvme$subsystem", 00:32:28.367 "trtype": "$TEST_TRANSPORT", 00:32:28.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:28.367 "adrfam": "ipv4", 00:32:28.367 "trsvcid": "$NVMF_PORT", 00:32:28.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:28.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:28.367 "hdgst": ${hdgst:-false}, 00:32:28.367 "ddgst": ${ddgst:-false} 00:32:28.367 }, 00:32:28.367 "method": "bdev_nvme_attach_controller" 00:32:28.367 } 00:32:28.367 EOF 00:32:28.367 )") 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:28.367 05:27:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:28.367 "params": { 00:32:28.367 "name": "Nvme0", 00:32:28.367 "trtype": "tcp", 00:32:28.367 "traddr": "10.0.0.2", 00:32:28.367 "adrfam": "ipv4", 00:32:28.367 "trsvcid": "4420", 00:32:28.367 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:28.367 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:28.367 "hdgst": false, 00:32:28.367 "ddgst": false 00:32:28.367 }, 00:32:28.367 "method": "bdev_nvme_attach_controller" 00:32:28.367 },{ 00:32:28.367 "params": { 00:32:28.367 "name": "Nvme1", 00:32:28.367 "trtype": "tcp", 00:32:28.367 "traddr": "10.0.0.2", 00:32:28.367 "adrfam": "ipv4", 00:32:28.367 "trsvcid": "4420", 00:32:28.367 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:28.367 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:28.367 "hdgst": false, 00:32:28.367 "ddgst": false 00:32:28.367 }, 00:32:28.367 "method": "bdev_nvme_attach_controller" 00:32:28.367 },{ 00:32:28.367 "params": { 00:32:28.367 "name": "Nvme2", 00:32:28.367 "trtype": "tcp", 00:32:28.367 "traddr": "10.0.0.2", 00:32:28.367 "adrfam": "ipv4", 00:32:28.367 "trsvcid": "4420", 00:32:28.367 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:28.367 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:28.367 "hdgst": false, 00:32:28.367 "ddgst": false 00:32:28.367 }, 00:32:28.368 "method": "bdev_nvme_attach_controller" 00:32:28.368 }' 00:32:28.368 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:28.368 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:28.368 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:28.368 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:28.368 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:28.368 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:28.368 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 
00:32:28.368 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:28.368 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:28.368 05:27:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:28.368 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:28.368 ... 00:32:28.368 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:28.368 ... 00:32:28.368 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:28.368 ... 00:32:28.368 fio-3.35 00:32:28.368 Starting 24 threads 00:32:40.774 00:32:40.774 filename0: (groupid=0, jobs=1): err= 0: pid=3837785: Mon Dec 9 05:27:15 2024 00:32:40.774 read: IOPS=559, BW=2240KiB/s (2294kB/s)(21.9MiB/10001msec) 00:32:40.774 slat (nsec): min=7137, max=60944, avg=12603.19, stdev=5072.53 00:32:40.774 clat (usec): min=8825, max=32061, avg=28462.14, stdev=1644.76 00:32:40.774 lat (usec): min=8841, max=32069, avg=28474.75, stdev=1643.73 00:32:40.774 clat percentiles (usec): 00:32:40.774 | 1.00th=[20841], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443], 00:32:40.774 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705], 00:32:40.774 | 70.00th=[28705], 80.00th=[28967], 90.00th=[28967], 95.00th=[29230], 00:32:40.774 | 99.00th=[29754], 99.50th=[29754], 99.90th=[31851], 99.95th=[31851], 00:32:40.774 | 99.99th=[32113] 00:32:40.774 bw ( KiB/s): min= 2171, max= 2432, per=4.17%, avg=2236.58, stdev=78.36, samples=19 00:32:40.774 iops : min= 542, max= 608, avg=559.11, stdev=19.63, samples=19 00:32:40.774 lat (msec) : 10=0.25%, 20=0.61%, 50=99.14% 00:32:40.774 cpu : usr=98.52%, sys=1.06%, ctx=13, majf=0, minf=9 00:32:40.774 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:40.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.774 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.774 issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.774 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:40.774 filename0: (groupid=0, jobs=1): err= 0: pid=3837786: Mon Dec 9 05:27:15 2024 00:32:40.774 read: IOPS=556, BW=2226KiB/s (2280kB/s)(21.7MiB/10001msec) 00:32:40.774 slat (usec): min=6, max=103, avg=20.42, stdev=21.89 00:32:40.774 clat (usec): min=11406, max=58642, avg=28561.16, stdev=2196.22 00:32:40.774 lat (usec): min=11414, max=58679, avg=28581.58, stdev=2195.58 00:32:40.774 clat percentiles (usec): 00:32:40.774 | 1.00th=[27132], 5.00th=[27657], 10.00th=[28181], 20.00th=[28443], 00:32:40.774 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705], 00:32:40.774 | 70.00th=[28705], 80.00th=[28967], 90.00th=[28967], 95.00th=[29230], 00:32:40.774 | 99.00th=[29754], 99.50th=[30278], 99.90th=[58459], 99.95th=[58459], 00:32:40.774 | 99.99th=[58459] 00:32:40.774 bw ( KiB/s): min= 2036, max= 2304, per=4.13%, avg=2215.79, stdev=76.09, samples=19 00:32:40.774 iops : min= 509, max= 576, avg=553.95, stdev=19.02, samples=19 00:32:40.774 lat (msec) : 20=0.70%, 50=99.01%, 100=0.29% 00:32:40.774 cpu : usr=98.57%, sys=1.00%, ctx=12, majf=0, minf=9 00:32:40.774 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 
8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:40.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.774 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.774 issued rwts: total=5566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.774 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:40.774 filename0: (groupid=0, jobs=1): err= 0: pid=3837787: Mon Dec 9 05:27:15 2024 00:32:40.774 read: IOPS=559, BW=2238KiB/s (2292kB/s)(21.9MiB/10011msec) 00:32:40.774 slat (nsec): min=6957, max=65808, avg=25620.06, stdev=12810.48 00:32:40.774 clat (usec): min=11677, max=47642, avg=28360.56, stdev=2770.19 00:32:40.774 lat (usec): min=11696, max=47649, avg=28386.18, stdev=2771.09 00:32:40.774 clat percentiles (usec): 00:32:40.774 | 1.00th=[17433], 5.00th=[27657], 10.00th=[28181], 20.00th=[28181], 00:32:40.774 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:32:40.774 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[29230], 00:32:40.774 | 99.00th=[39060], 99.50th=[44303], 99.90th=[46400], 99.95th=[46400], 00:32:40.774 | 99.99th=[47449] 00:32:40.774 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2230.74, stdev=61.58, samples=19 00:32:40.774 iops : min= 544, max= 576, avg=557.68, stdev=15.39, samples=19 00:32:40.774 lat (msec) : 20=2.03%, 50=97.97% 00:32:40.774 cpu : usr=98.48%, sys=1.10%, ctx=16, majf=0, minf=9 00:32:40.774 IO depths : 1=4.0%, 2=9.9%, 4=23.7%, 8=53.8%, 16=8.6%, 32=0.0%, >=64=0.0% 00:32:40.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.774 complete : 0=0.0%, 4=93.9%, 8=0.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.774 issued rwts: total=5602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.774 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:40.774 filename0: (groupid=0, jobs=1): err= 0: pid=3837788: Mon Dec 9 05:27:15 2024 00:32:40.774 read: IOPS=558, BW=2233KiB/s (2286kB/s)(21.8MiB/10004msec) 00:32:40.774 slat (nsec): min=7994, max=64507, avg=24879.14, stdev=11051.40 00:32:40.774 clat (usec): min=10989, max=30015, avg=28477.52, stdev=1075.24 00:32:40.774 lat (usec): min=11004, max=30030, avg=28502.40, stdev=1074.59 00:32:40.774 clat percentiles (usec): 00:32:40.774 | 1.00th=[27657], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443], 00:32:40.774 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705], 00:32:40.774 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[29230], 00:32:40.774 | 99.00th=[29492], 99.50th=[29492], 99.90th=[30016], 99.95th=[30016], 00:32:40.774 | 99.99th=[30016] 00:32:40.774 bw ( KiB/s): min= 2171, max= 2304, per=4.15%, avg=2229.84, stdev=64.99, samples=19 00:32:40.774 iops : min= 542, max= 576, avg=557.42, stdev=16.29, samples=19 00:32:40.774 lat (msec) : 20=0.29%, 50=99.71% 00:32:40.774 cpu : usr=98.60%, sys=1.00%, ctx=14, majf=0, minf=9 00:32:40.774 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:40.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.774 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.774 issued rwts: total=5584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.774 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:40.774 filename0: (groupid=0, jobs=1): err= 0: pid=3837789: Mon Dec 9 05:27:15 2024 00:32:40.774 read: IOPS=558, BW=2233KiB/s (2286kB/s)(21.8MiB/10004msec) 00:32:40.774 slat (nsec): min=6315, max=71235, avg=34874.26, stdev=14174.60 00:32:40.774 
clat (usec): min=3449, max=45824, avg=28366.16, stdev=1910.13 00:32:40.774 lat (usec): min=3456, max=45841, avg=28401.04, stdev=1910.70 00:32:40.774 clat percentiles (usec): 00:32:40.774 | 1.00th=[27395], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:32:40.774 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:32:40.774 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:32:40.774 | 99.00th=[29492], 99.50th=[29754], 99.90th=[45876], 99.95th=[45876], 00:32:40.774 | 99.99th=[45876] 00:32:40.774 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2223.37, stdev=75.94, samples=19 00:32:40.774 iops : min= 513, max= 576, avg=555.84, stdev=18.99, samples=19 00:32:40.774 lat (msec) : 4=0.29%, 20=0.29%, 50=99.43% 00:32:40.774 cpu : usr=98.72%, sys=0.88%, ctx=12, majf=0, minf=9 00:32:40.774 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:40.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.774 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.774 issued rwts: total=5584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.774 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:40.774 filename0: (groupid=0, jobs=1): err= 0: pid=3837790: Mon Dec 9 05:27:15 2024 00:32:40.774 read: IOPS=556, BW=2226KiB/s (2280kB/s)(21.8MiB/10005msec) 00:32:40.774 slat (nsec): min=6754, max=66397, avg=31665.24, stdev=14338.15 00:32:40.774 clat (usec): min=20499, max=37765, avg=28507.92, stdev=719.39 00:32:40.774 lat (usec): min=20559, max=37789, avg=28539.59, stdev=716.36 00:32:40.774 clat percentiles (usec): 00:32:40.774 | 1.00th=[27657], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:32:40.774 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:32:40.774 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[29230], 00:32:40.774 | 99.00th=[29754], 99.50th=[30016], 99.90th=[35390], 99.95th=[35914], 00:32:40.775 | 99.99th=[38011] 00:32:40.775 bw ( KiB/s): min= 2171, max= 2304, per=4.14%, avg=2222.63, stdev=63.87, samples=19 00:32:40.775 iops : min= 542, max= 576, avg=555.58, stdev=16.04, samples=19 00:32:40.775 lat (msec) : 50=100.00% 00:32:40.775 cpu : usr=98.32%, sys=1.27%, ctx=18, majf=0, minf=9 00:32:40.775 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:40.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.775 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.775 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:40.775 filename0: (groupid=0, jobs=1): err= 0: pid=3837791: Mon Dec 9 05:27:15 2024 00:32:40.775 read: IOPS=567, BW=2270KiB/s (2324kB/s)(22.2MiB/10011msec) 00:32:40.775 slat (nsec): min=5115, max=65468, avg=21922.59, stdev=12231.58 00:32:40.775 clat (usec): min=7565, max=54894, avg=28019.94, stdev=2966.61 00:32:40.775 lat (usec): min=7579, max=54908, avg=28041.86, stdev=2968.22 00:32:40.775 clat percentiles (usec): 00:32:40.775 | 1.00th=[16450], 5.00th=[21103], 10.00th=[27657], 20.00th=[28181], 00:32:40.775 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:32:40.775 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[29230], 00:32:40.775 | 99.00th=[36963], 99.50th=[39584], 99.90th=[47973], 99.95th=[47973], 00:32:40.775 | 99.99th=[54789] 00:32:40.775 bw ( KiB/s): min= 2064, max= 2544, per=4.22%, avg=2263.58, 
stdev=110.13, samples=19 00:32:40.775 iops : min= 516, max= 636, avg=565.89, stdev=27.53, samples=19 00:32:40.775 lat (msec) : 10=0.21%, 20=3.26%, 50=96.50%, 100=0.04% 00:32:40.775 cpu : usr=98.36%, sys=1.24%, ctx=14, majf=0, minf=9 00:32:40.775 IO depths : 1=3.2%, 2=6.9%, 4=15.1%, 8=63.5%, 16=11.2%, 32=0.0%, >=64=0.0% 00:32:40.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.775 complete : 0=0.0%, 4=92.0%, 8=4.2%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.775 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:40.775 filename0: (groupid=0, jobs=1): err= 0: pid=3837792: Mon Dec 9 05:27:15 2024 00:32:40.775 read: IOPS=556, BW=2224KiB/s (2277kB/s)(21.8MiB/10014msec) 00:32:40.775 slat (nsec): min=7493, max=70434, avg=32474.99, stdev=12707.45 00:32:40.775 clat (usec): min=20430, max=44916, avg=28476.72, stdev=1030.75 00:32:40.775 lat (usec): min=20464, max=44941, avg=28509.20, stdev=1030.09 00:32:40.775 clat percentiles (usec): 00:32:40.775 | 1.00th=[27657], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:32:40.775 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:32:40.775 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28967], 95.00th=[29230], 00:32:40.775 | 99.00th=[29492], 99.50th=[29754], 99.90th=[44827], 99.95th=[44827], 00:32:40.775 | 99.99th=[44827] 00:32:40.775 bw ( KiB/s): min= 2048, max= 2308, per=4.14%, avg=2224.63, stdev=76.26, samples=19 00:32:40.775 iops : min= 512, max= 577, avg=556.16, stdev=19.06, samples=19 00:32:40.775 lat (msec) : 50=100.00% 00:32:40.775 cpu : usr=97.55%, sys=1.29%, ctx=237, majf=0, minf=9 00:32:40.775 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:40.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.775 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.775 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:40.775 filename1: (groupid=0, jobs=1): err= 0: pid=3837793: Mon Dec 9 05:27:15 2024 00:32:40.775 read: IOPS=559, BW=2239KiB/s (2293kB/s)(21.9MiB/10004msec) 00:32:40.775 slat (nsec): min=7312, max=65491, avg=21413.36, stdev=10678.05 00:32:40.775 clat (usec): min=8878, max=29862, avg=28417.75, stdev=1615.01 00:32:40.775 lat (usec): min=8894, max=29875, avg=28439.16, stdev=1614.06 00:32:40.775 clat percentiles (usec): 00:32:40.775 | 1.00th=[20841], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443], 00:32:40.775 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705], 00:32:40.775 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[29230], 00:32:40.775 | 99.00th=[29492], 99.50th=[29492], 99.90th=[29754], 99.95th=[29754], 00:32:40.775 | 99.99th=[29754] 00:32:40.775 bw ( KiB/s): min= 2171, max= 2432, per=4.17%, avg=2236.58, stdev=78.36, samples=19 00:32:40.775 iops : min= 542, max= 608, avg=559.11, stdev=19.63, samples=19 00:32:40.775 lat (msec) : 10=0.25%, 20=0.64%, 50=99.11% 00:32:40.775 cpu : usr=98.46%, sys=1.13%, ctx=6, majf=0, minf=9 00:32:40.775 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:40.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.775 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.775 issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.775 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:32:40.775 filename1: (groupid=0, jobs=1): err= 0: pid=3837794: Mon Dec 9 05:27:15 2024 00:32:40.775 read: IOPS=556, BW=2226KiB/s (2279kB/s)(21.8MiB/10006msec) 00:32:40.775 slat (nsec): min=6388, max=63931, avg=31560.55, stdev=9727.63 00:32:40.775 clat (usec): min=19954, max=44942, avg=28484.75, stdev=819.16 00:32:40.775 lat (usec): min=19976, max=44960, avg=28516.32, stdev=818.50 00:32:40.775 clat percentiles (usec): 00:32:40.775 | 1.00th=[27657], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:32:40.775 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:32:40.775 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:32:40.775 | 99.00th=[29492], 99.50th=[34341], 99.90th=[35390], 99.95th=[44827], 00:32:40.775 | 99.99th=[44827] 00:32:40.775 bw ( KiB/s): min= 2171, max= 2304, per=4.14%, avg=2222.63, stdev=63.31, samples=19 00:32:40.775 iops : min= 542, max= 576, avg=555.58, stdev=15.81, samples=19 00:32:40.775 lat (msec) : 20=0.04%, 50=99.96% 00:32:40.775 cpu : usr=98.47%, sys=1.13%, ctx=12, majf=0, minf=9 00:32:40.775 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:40.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.775 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.775 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:40.775 filename1: (groupid=0, jobs=1): err= 0: pid=3837795: Mon Dec 9 05:27:15 2024 00:32:40.775 read: IOPS=558, BW=2233KiB/s (2286kB/s)(21.8MiB/10004msec) 00:32:40.775 slat (nsec): min=11772, max=59296, avg=29761.65, stdev=9977.91 00:32:40.775 clat (usec): min=10991, max=29896, avg=28425.45, stdev=1075.15 00:32:40.775 lat (usec): min=11009, max=29926, avg=28455.21, stdev=1075.14 00:32:40.775 clat percentiles (usec): 00:32:40.775 | 1.00th=[27395], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:32:40.775 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:32:40.775 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:32:40.775 | 99.00th=[29492], 99.50th=[29492], 99.90th=[29754], 99.95th=[29754], 00:32:40.775 | 99.99th=[30016] 00:32:40.775 bw ( KiB/s): min= 2171, max= 2304, per=4.15%, avg=2229.84, stdev=64.99, samples=19 00:32:40.775 iops : min= 542, max= 576, avg=557.42, stdev=16.29, samples=19 00:32:40.775 lat (msec) : 20=0.29%, 50=99.71% 00:32:40.775 cpu : usr=98.17%, sys=1.43%, ctx=12, majf=0, minf=9 00:32:40.775 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:40.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.775 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.775 issued rwts: total=5584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:40.775 filename1: (groupid=0, jobs=1): err= 0: pid=3837796: Mon Dec 9 05:27:15 2024 00:32:40.775 read: IOPS=557, BW=2230KiB/s (2284kB/s)(21.8MiB/10016msec) 00:32:40.775 slat (nsec): min=8099, max=58013, avg=28687.29, stdev=8597.11 00:32:40.775 clat (usec): min=20104, max=29985, avg=28463.05, stdev=685.21 00:32:40.775 lat (usec): min=20126, max=30006, avg=28491.74, stdev=685.20 00:32:40.775 clat percentiles (usec): 00:32:40.775 | 1.00th=[27657], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181], 00:32:40.775 | 30.00th=[28443], 40.00th=[28443], 
50.00th=[28443], 60.00th=[28443], 00:32:40.775 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[29230], 00:32:40.775 | 99.00th=[29492], 99.50th=[29492], 99.90th=[29754], 99.95th=[30016], 00:32:40.775 | 99.99th=[30016] 00:32:40.775 bw ( KiB/s): min= 2176, max= 2308, per=4.14%, avg=2224.63, stdev=63.21, samples=19 00:32:40.775 iops : min= 544, max= 577, avg=556.16, stdev=15.80, samples=19 00:32:40.775 lat (msec) : 50=100.00% 00:32:40.775 cpu : usr=97.84%, sys=1.75%, ctx=19, majf=0, minf=9 00:32:40.775 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:40.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.775 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.775 issued rwts: total=5584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:40.775 filename1: (groupid=0, jobs=1): err= 0: pid=3837797: Mon Dec 9 05:27:15 2024 00:32:40.775 read: IOPS=557, BW=2229KiB/s (2282kB/s)(21.8MiB/10003msec) 00:32:40.775 slat (nsec): min=7034, max=72332, avg=31461.98, stdev=10081.45 00:32:40.775 clat (usec): min=19891, max=41365, avg=28421.89, stdev=898.20 00:32:40.775 lat (usec): min=19904, max=41382, avg=28453.35, stdev=899.33 00:32:40.775 clat percentiles (usec): 00:32:40.775 | 1.00th=[27132], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:32:40.775 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:32:40.775 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:32:40.775 | 99.00th=[29492], 99.50th=[31851], 99.90th=[38536], 99.95th=[38536], 00:32:40.775 | 99.99th=[41157] 00:32:40.775 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2225.68, stdev=62.40, samples=19 00:32:40.775 iops : min= 544, max= 576, avg=556.42, stdev=15.60, samples=19 00:32:40.775 lat (msec) : 20=0.09%, 50=99.91% 00:32:40.775 cpu : usr=98.53%, sys=1.07%, ctx=11, majf=0, minf=9 00:32:40.775 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:40.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.775 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.775 issued rwts: total=5574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:40.775 filename1: (groupid=0, jobs=1): err= 0: pid=3837798: Mon Dec 9 05:27:15 2024 00:32:40.776 read: IOPS=556, BW=2227KiB/s (2280kB/s)(21.8MiB/10001msec) 00:32:40.776 slat (nsec): min=6824, max=66698, avg=30757.75, stdev=10815.87 00:32:40.776 clat (usec): min=12276, max=59209, avg=28434.80, stdev=1612.17 00:32:40.776 lat (usec): min=12301, max=59228, avg=28465.56, stdev=1612.41 00:32:40.776 clat percentiles (usec): 00:32:40.776 | 1.00th=[27657], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:32:40.776 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:32:40.776 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:32:40.776 | 99.00th=[29492], 99.50th=[29754], 99.90th=[50070], 99.95th=[50594], 00:32:40.776 | 99.99th=[58983] 00:32:40.776 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2223.16, stdev=76.45, samples=19 00:32:40.776 iops : min= 512, max= 576, avg=555.79, stdev=19.11, samples=19 00:32:40.776 lat (msec) : 20=0.38%, 50=99.34%, 100=0.29% 00:32:40.776 cpu : usr=98.47%, sys=1.12%, ctx=13, majf=0, minf=9 00:32:40.776 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 
00:32:40.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.776 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.776 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:40.776 filename1: (groupid=0, jobs=1): err= 0: pid=3837799: Mon Dec 9 05:27:15 2024 00:32:40.776 read: IOPS=556, BW=2227KiB/s (2280kB/s)(21.8MiB/10002msec) 00:32:40.776 slat (nsec): min=6748, max=64344, avg=30125.37, stdev=10618.49 00:32:40.776 clat (usec): min=12293, max=51299, avg=28442.33, stdev=1588.26 00:32:40.776 lat (usec): min=12306, max=51318, avg=28472.46, stdev=1588.52 00:32:40.776 clat percentiles (usec): 00:32:40.776 | 1.00th=[27657], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:32:40.776 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:32:40.776 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:32:40.776 | 99.00th=[29492], 99.50th=[29492], 99.90th=[51119], 99.95th=[51119], 00:32:40.776 | 99.99th=[51119] 00:32:40.776 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2223.37, stdev=75.94, samples=19 00:32:40.776 iops : min= 513, max= 576, avg=555.84, stdev=18.99, samples=19 00:32:40.776 lat (msec) : 20=0.34%, 50=99.37%, 100=0.29% 00:32:40.776 cpu : usr=98.49%, sys=1.10%, ctx=11, majf=0, minf=9 00:32:40.776 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:40.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.776 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.776 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:40.776 filename1: (groupid=0, jobs=1): err= 0: pid=3837800: Mon Dec 9 05:27:15 2024 00:32:40.776 read: IOPS=568, BW=2274KiB/s (2329kB/s)(22.2MiB/10012msec) 00:32:40.776 slat (nsec): min=6907, max=59749, avg=14150.80, stdev=10337.68 00:32:40.776 clat (usec): min=7659, max=58472, avg=28080.00, stdev=4248.50 00:32:40.776 lat (usec): min=7675, max=58484, avg=28094.15, stdev=4248.55 00:32:40.776 clat percentiles (usec): 00:32:40.776 | 1.00th=[16319], 5.00th=[19792], 10.00th=[21627], 20.00th=[26870], 00:32:40.776 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705], 00:32:40.776 | 70.00th=[28705], 80.00th=[29230], 90.00th=[32637], 95.00th=[36439], 00:32:40.776 | 99.00th=[38011], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:32:40.776 | 99.99th=[58459] 00:32:40.776 bw ( KiB/s): min= 2096, max= 2416, per=4.23%, avg=2269.47, stdev=74.72, samples=19 00:32:40.776 iops : min= 524, max= 604, avg=567.37, stdev=18.68, samples=19 00:32:40.776 lat (msec) : 10=0.25%, 20=5.83%, 50=93.89%, 100=0.04% 00:32:40.776 cpu : usr=98.52%, sys=1.08%, ctx=14, majf=0, minf=9 00:32:40.776 IO depths : 1=0.1%, 2=0.3%, 4=3.0%, 8=80.4%, 16=16.3%, 32=0.0%, >=64=0.0% 00:32:40.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.776 complete : 0=0.0%, 4=89.1%, 8=9.0%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.776 issued rwts: total=5692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:40.776 filename2: (groupid=0, jobs=1): err= 0: pid=3837801: Mon Dec 9 05:27:15 2024 00:32:40.776 read: IOPS=556, BW=2227KiB/s (2280kB/s)(21.8MiB/10001msec) 00:32:40.776 slat (nsec): min=7326, max=71348, avg=33734.14, stdev=14480.84 00:32:40.776 clat 
(usec): min=20608, max=32656, avg=28482.07, stdev=587.83 00:32:40.776 lat (usec): min=20662, max=32673, avg=28515.81, stdev=584.95 00:32:40.776 clat percentiles (usec): 00:32:40.776 | 1.00th=[27657], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:32:40.776 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:32:40.776 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[29230], 00:32:40.776 | 99.00th=[29492], 99.50th=[29754], 99.90th=[32637], 99.95th=[32637], 00:32:40.776 | 99.99th=[32637] 00:32:40.776 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2223.16, stdev=63.44, samples=19 00:32:40.776 iops : min= 544, max= 576, avg=555.79, stdev=15.86, samples=19 00:32:40.776 lat (msec) : 50=100.00% 00:32:40.776 cpu : usr=98.53%, sys=1.08%, ctx=7, majf=0, minf=9 00:32:40.776 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:40.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.776 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.776 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:40.776 filename2: (groupid=0, jobs=1): err= 0: pid=3837802: Mon Dec 9 05:27:15 2024 00:32:40.776 read: IOPS=556, BW=2227KiB/s (2280kB/s)(21.8MiB/10002msec) 00:32:40.776 slat (nsec): min=9442, max=62533, avg=29736.86, stdev=9869.15 00:32:40.776 clat (usec): min=12299, max=51291, avg=28451.11, stdev=1587.39 00:32:40.776 lat (usec): min=12312, max=51308, avg=28480.84, stdev=1587.55 00:32:40.776 clat percentiles (usec): 00:32:40.776 | 1.00th=[27657], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:32:40.776 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:32:40.776 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:32:40.776 | 99.00th=[29492], 99.50th=[29754], 99.90th=[51119], 99.95th=[51119], 00:32:40.776 | 99.99th=[51119] 00:32:40.776 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2223.37, stdev=75.94, samples=19 00:32:40.776 iops : min= 513, max= 576, avg=555.84, stdev=18.99, samples=19 00:32:40.776 lat (msec) : 20=0.29%, 50=99.43%, 100=0.29% 00:32:40.776 cpu : usr=98.53%, sys=1.06%, ctx=15, majf=0, minf=9 00:32:40.776 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:40.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.776 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.776 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:40.776 filename2: (groupid=0, jobs=1): err= 0: pid=3837803: Mon Dec 9 05:27:15 2024 00:32:40.776 read: IOPS=558, BW=2233KiB/s (2286kB/s)(21.8MiB/10004msec) 00:32:40.776 slat (nsec): min=7773, max=54001, avg=28233.53, stdev=7527.70 00:32:40.776 clat (usec): min=13117, max=36789, avg=28424.67, stdev=997.82 00:32:40.776 lat (usec): min=13126, max=36809, avg=28452.90, stdev=998.76 00:32:40.776 clat percentiles (usec): 00:32:40.776 | 1.00th=[27395], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:32:40.776 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:32:40.776 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:32:40.776 | 99.00th=[29492], 99.50th=[29492], 99.90th=[30016], 99.95th=[30016], 00:32:40.776 | 99.99th=[36963] 00:32:40.776 bw ( KiB/s): min= 2171, max= 2304, per=4.15%, avg=2229.84, 
stdev=64.99, samples=19 00:32:40.776 iops : min= 542, max= 576, avg=557.42, stdev=16.29, samples=19 00:32:40.776 lat (msec) : 20=0.32%, 50=99.68% 00:32:40.776 cpu : usr=98.23%, sys=1.37%, ctx=23, majf=0, minf=9 00:32:40.776 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:40.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.776 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.776 issued rwts: total=5584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:40.776 filename2: (groupid=0, jobs=1): err= 0: pid=3837804: Mon Dec 9 05:27:15 2024 00:32:40.776 read: IOPS=556, BW=2227KiB/s (2280kB/s)(21.8MiB/10001msec) 00:32:40.776 slat (nsec): min=6393, max=63992, avg=30973.54, stdev=9783.83 00:32:40.776 clat (usec): min=12279, max=67424, avg=28447.94, stdev=1735.64 00:32:40.776 lat (usec): min=12309, max=67440, avg=28478.92, stdev=1735.84 00:32:40.776 clat percentiles (usec): 00:32:40.776 | 1.00th=[27395], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:32:40.776 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:32:40.776 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:32:40.776 | 99.00th=[29492], 99.50th=[30016], 99.90th=[50594], 99.95th=[50594], 00:32:40.776 | 99.99th=[67634] 00:32:40.776 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2223.16, stdev=76.45, samples=19 00:32:40.776 iops : min= 512, max= 576, avg=555.79, stdev=19.11, samples=19 00:32:40.776 lat (msec) : 20=0.32%, 50=99.39%, 100=0.29% 00:32:40.776 cpu : usr=98.56%, sys=1.04%, ctx=10, majf=0, minf=9 00:32:40.776 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:40.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.776 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.776 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:40.776 filename2: (groupid=0, jobs=1): err= 0: pid=3837805: Mon Dec 9 05:27:15 2024 00:32:40.776 read: IOPS=585, BW=2340KiB/s (2396kB/s)(22.9MiB/10003msec) 00:32:40.776 slat (nsec): min=4249, max=62564, avg=11407.61, stdev=4077.44 00:32:40.776 clat (usec): min=1311, max=37240, avg=27252.37, stdev=4669.40 00:32:40.776 lat (usec): min=1320, max=37247, avg=27263.77, stdev=4670.03 00:32:40.776 clat percentiles (usec): 00:32:40.776 | 1.00th=[ 2835], 5.00th=[16581], 10.00th=[23987], 20.00th=[28443], 00:32:40.776 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705], 00:32:40.776 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[29230], 00:32:40.776 | 99.00th=[29754], 99.50th=[30016], 99.90th=[31327], 99.95th=[37487], 00:32:40.776 | 99.99th=[37487] 00:32:40.777 bw ( KiB/s): min= 2176, max= 3072, per=4.36%, avg=2342.47, stdev=278.72, samples=19 00:32:40.777 iops : min= 544, max= 768, avg=585.58, stdev=69.69, samples=19 00:32:40.777 lat (msec) : 2=0.82%, 4=0.27%, 10=1.88%, 20=4.00%, 50=93.03% 00:32:40.777 cpu : usr=98.31%, sys=1.28%, ctx=14, majf=0, minf=9 00:32:40.777 IO depths : 1=5.3%, 2=10.7%, 4=22.3%, 8=54.3%, 16=7.3%, 32=0.0%, >=64=0.0% 00:32:40.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.777 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.777 issued rwts: total=5852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.777 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:32:40.777 filename2: (groupid=0, jobs=1): err= 0: pid=3837806: Mon Dec 9 05:27:15 2024 00:32:40.777 read: IOPS=559, BW=2240KiB/s (2293kB/s)(21.9MiB/10002msec) 00:32:40.777 slat (nsec): min=7257, max=84405, avg=16966.78, stdev=9841.87 00:32:40.777 clat (usec): min=8884, max=32143, avg=28434.30, stdev=1648.60 00:32:40.777 lat (usec): min=8900, max=32151, avg=28451.26, stdev=1647.24 00:32:40.777 clat percentiles (usec): 00:32:40.777 | 1.00th=[20841], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443], 00:32:40.777 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705], 00:32:40.777 | 70.00th=[28705], 80.00th=[28967], 90.00th=[28967], 95.00th=[29230], 00:32:40.777 | 99.00th=[29492], 99.50th=[29754], 99.90th=[32113], 99.95th=[32113], 00:32:40.777 | 99.99th=[32113] 00:32:40.777 bw ( KiB/s): min= 2171, max= 2432, per=4.17%, avg=2236.58, stdev=78.36, samples=19 00:32:40.777 iops : min= 542, max= 608, avg=559.11, stdev=19.63, samples=19 00:32:40.777 lat (msec) : 10=0.29%, 20=0.57%, 50=99.14% 00:32:40.777 cpu : usr=98.51%, sys=1.07%, ctx=17, majf=0, minf=9 00:32:40.777 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:40.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.777 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.777 issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:40.777 filename2: (groupid=0, jobs=1): err= 0: pid=3837807: Mon Dec 9 05:27:15 2024 00:32:40.777 read: IOPS=559, BW=2239KiB/s (2293kB/s)(21.9MiB/10004msec) 00:32:40.777 slat (nsec): min=7868, max=89138, avg=22115.49, stdev=11683.99 00:32:40.777 clat (usec): min=8928, max=37099, avg=28407.77, stdev=1664.39 00:32:40.777 lat (usec): min=8944, max=37114, avg=28429.88, stdev=1663.19 00:32:40.777 clat percentiles (usec): 00:32:40.777 | 1.00th=[20579], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443], 00:32:40.777 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705], 00:32:40.777 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[29230], 00:32:40.777 | 99.00th=[29492], 99.50th=[29492], 99.90th=[36439], 99.95th=[36963], 00:32:40.777 | 99.99th=[36963] 00:32:40.777 bw ( KiB/s): min= 2171, max= 2432, per=4.17%, avg=2236.58, stdev=78.36, samples=19 00:32:40.777 iops : min= 542, max= 608, avg=559.11, stdev=19.63, samples=19 00:32:40.777 lat (msec) : 10=0.29%, 20=0.68%, 50=99.04% 00:32:40.777 cpu : usr=97.73%, sys=1.81%, ctx=47, majf=0, minf=9 00:32:40.777 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:40.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.777 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.777 issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:40.777 filename2: (groupid=0, jobs=1): err= 0: pid=3837808: Mon Dec 9 05:27:15 2024 00:32:40.777 read: IOPS=556, BW=2227KiB/s (2280kB/s)(21.8MiB/10002msec) 00:32:40.777 slat (usec): min=4, max=103, avg=33.44, stdev=21.21 00:32:40.777 clat (usec): min=12354, max=47867, avg=28478.67, stdev=1472.54 00:32:40.777 lat (usec): min=12366, max=47880, avg=28512.12, stdev=1469.78 00:32:40.777 clat percentiles (usec): 00:32:40.777 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27919], 20.00th=[28181], 00:32:40.777 | 30.00th=[28443], 
40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:32:40.777 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[29230], 00:32:40.777 | 99.00th=[29754], 99.50th=[30016], 99.90th=[47973], 99.95th=[47973], 00:32:40.777 | 99.99th=[47973] 00:32:40.777 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2223.16, stdev=76.45, samples=19 00:32:40.777 iops : min= 512, max= 576, avg=555.79, stdev=19.11, samples=19 00:32:40.777 lat (msec) : 20=0.29%, 50=99.71% 00:32:40.777 cpu : usr=98.51%, sys=1.07%, ctx=14, majf=0, minf=9 00:32:40.777 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:40.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.777 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.777 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:40.777 00:32:40.777 Run status group 0 (all jobs): 00:32:40.777 READ: bw=52.4MiB/s (55.0MB/s), 2224KiB/s-2340KiB/s (2277kB/s-2396kB/s), io=525MiB (550MB), run=10001-10016msec 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:40.777 bdev_null0 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.777 05:27:15 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:40.777 [2024-12-09 05:27:15.878739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.777 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:40.778 bdev_null1 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:40.778 { 00:32:40.778 "params": { 00:32:40.778 "name": "Nvme$subsystem", 00:32:40.778 "trtype": "$TEST_TRANSPORT", 00:32:40.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:40.778 "adrfam": "ipv4", 00:32:40.778 "trsvcid": "$NVMF_PORT", 00:32:40.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:40.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:40.778 "hdgst": ${hdgst:-false}, 00:32:40.778 "ddgst": ${ddgst:-false} 00:32:40.778 }, 00:32:40.778 "method": "bdev_nvme_attach_controller" 00:32:40.778 } 00:32:40.778 EOF 00:32:40.778 )") 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:40.778 { 00:32:40.778 "params": { 00:32:40.778 "name": "Nvme$subsystem", 00:32:40.778 "trtype": "$TEST_TRANSPORT", 00:32:40.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:40.778 "adrfam": "ipv4", 00:32:40.778 "trsvcid": "$NVMF_PORT", 00:32:40.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:40.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:40.778 "hdgst": ${hdgst:-false}, 00:32:40.778 "ddgst": ${ddgst:-false} 00:32:40.778 }, 00:32:40.778 "method": "bdev_nvme_attach_controller" 00:32:40.778 } 00:32:40.778 EOF 00:32:40.778 )") 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:40.778 "params": { 00:32:40.778 "name": "Nvme0", 00:32:40.778 "trtype": "tcp", 00:32:40.778 "traddr": "10.0.0.2", 00:32:40.778 "adrfam": "ipv4", 00:32:40.778 "trsvcid": "4420", 00:32:40.778 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:40.778 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:40.778 "hdgst": false, 00:32:40.778 "ddgst": false 00:32:40.778 }, 00:32:40.778 "method": "bdev_nvme_attach_controller" 00:32:40.778 },{ 00:32:40.778 "params": { 00:32:40.778 "name": "Nvme1", 00:32:40.778 "trtype": "tcp", 00:32:40.778 "traddr": "10.0.0.2", 00:32:40.778 "adrfam": "ipv4", 00:32:40.778 "trsvcid": "4420", 00:32:40.778 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:40.778 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:40.778 "hdgst": false, 00:32:40.778 "ddgst": false 00:32:40.778 }, 00:32:40.778 "method": "bdev_nvme_attach_controller" 00:32:40.778 }' 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:40.778 05:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:40.778 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:40.778 ... 00:32:40.778 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:40.778 ... 
00:32:40.778 fio-3.35 00:32:40.778 Starting 4 threads 00:32:46.057 00:32:46.057 filename0: (groupid=0, jobs=1): err= 0: pid=3840153: Mon Dec 9 05:27:22 2024 00:32:46.057 read: IOPS=2629, BW=20.5MiB/s (21.5MB/s)(103MiB/5002msec) 00:32:46.057 slat (nsec): min=4287, max=44912, avg=9435.92, stdev=3210.49 00:32:46.057 clat (usec): min=601, max=6464, avg=3013.75, stdev=548.38 00:32:46.057 lat (usec): min=613, max=6475, avg=3023.18, stdev=548.42 00:32:46.057 clat percentiles (usec): 00:32:46.057 | 1.00th=[ 1860], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2606], 00:32:46.057 | 30.00th=[ 2769], 40.00th=[ 2900], 50.00th=[ 3032], 60.00th=[ 3097], 00:32:46.057 | 70.00th=[ 3163], 80.00th=[ 3228], 90.00th=[ 3589], 95.00th=[ 4146], 00:32:46.057 | 99.00th=[ 4817], 99.50th=[ 5080], 99.90th=[ 5473], 99.95th=[ 5604], 00:32:46.057 | 99.99th=[ 6456] 00:32:46.057 bw ( KiB/s): min=20272, max=23024, per=25.88%, avg=21033.60, stdev=821.65, samples=10 00:32:46.057 iops : min= 2534, max= 2878, avg=2629.20, stdev=102.71, samples=10 00:32:46.057 lat (usec) : 750=0.01%, 1000=0.01% 00:32:46.057 lat (msec) : 2=1.85%, 4=91.90%, 10=6.23% 00:32:46.057 cpu : usr=95.66%, sys=4.00%, ctx=14, majf=0, minf=9 00:32:46.057 IO depths : 1=0.4%, 2=6.1%, 4=65.7%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:46.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.057 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.057 issued rwts: total=13154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:46.057 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:46.057 filename0: (groupid=0, jobs=1): err= 0: pid=3840154: Mon Dec 9 05:27:22 2024 00:32:46.057 read: IOPS=2532, BW=19.8MiB/s (20.7MB/s)(99.0MiB/5002msec) 00:32:46.057 slat (nsec): min=6294, max=79455, avg=9567.08, stdev=3444.62 00:32:46.057 clat (usec): min=610, max=6002, avg=3131.19, stdev=562.21 00:32:46.057 lat (usec): min=622, max=6013, avg=3140.75, stdev=562.05 00:32:46.057 clat percentiles (usec): 00:32:46.057 | 1.00th=[ 2040], 5.00th=[ 2376], 10.00th=[ 2540], 20.00th=[ 2769], 00:32:46.057 | 30.00th=[ 2868], 40.00th=[ 2999], 50.00th=[ 3097], 60.00th=[ 3163], 00:32:46.057 | 70.00th=[ 3195], 80.00th=[ 3326], 90.00th=[ 3884], 95.00th=[ 4424], 00:32:46.057 | 99.00th=[ 5014], 99.50th=[ 5211], 99.90th=[ 5538], 99.95th=[ 5866], 00:32:46.057 | 99.99th=[ 5997] 00:32:46.057 bw ( KiB/s): min=19536, max=20832, per=25.02%, avg=20328.89, stdev=414.85, samples=9 00:32:46.057 iops : min= 2442, max= 2604, avg=2541.11, stdev=51.86, samples=9 00:32:46.057 lat (usec) : 750=0.01% 00:32:46.057 lat (msec) : 2=0.73%, 4=90.54%, 10=8.72% 00:32:46.057 cpu : usr=95.40%, sys=4.24%, ctx=15, majf=0, minf=9 00:32:46.057 IO depths : 1=0.3%, 2=3.0%, 4=69.1%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:46.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.057 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.057 issued rwts: total=12667,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:46.057 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:46.057 filename1: (groupid=0, jobs=1): err= 0: pid=3840155: Mon Dec 9 05:27:22 2024 00:32:46.057 read: IOPS=2477, BW=19.4MiB/s (20.3MB/s)(96.8MiB/5002msec) 00:32:46.057 slat (nsec): min=6292, max=64791, avg=9461.29, stdev=3470.67 00:32:46.057 clat (usec): min=717, max=6613, avg=3201.73, stdev=568.08 00:32:46.057 lat (usec): min=727, max=6619, avg=3211.19, stdev=567.84 00:32:46.057 clat percentiles (usec): 00:32:46.057 | 1.00th=[ 2114], 
5.00th=[ 2507], 10.00th=[ 2671], 20.00th=[ 2835], 00:32:46.057 | 30.00th=[ 2933], 40.00th=[ 3064], 50.00th=[ 3130], 60.00th=[ 3163], 00:32:46.057 | 70.00th=[ 3261], 80.00th=[ 3392], 90.00th=[ 3982], 95.00th=[ 4490], 00:32:46.057 | 99.00th=[ 5080], 99.50th=[ 5276], 99.90th=[ 5800], 99.95th=[ 5866], 00:32:46.057 | 99.99th=[ 6587] 00:32:46.058 bw ( KiB/s): min=19280, max=20352, per=24.39%, avg=19821.40, stdev=451.45, samples=10 00:32:46.058 iops : min= 2410, max= 2544, avg=2477.60, stdev=56.53, samples=10 00:32:46.058 lat (usec) : 750=0.01%, 1000=0.01% 00:32:46.058 lat (msec) : 2=0.54%, 4=89.68%, 10=9.77% 00:32:46.058 cpu : usr=95.40%, sys=4.24%, ctx=10, majf=0, minf=9 00:32:46.058 IO depths : 1=0.4%, 2=3.1%, 4=68.7%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:46.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.058 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.058 issued rwts: total=12391,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:46.058 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:46.058 filename1: (groupid=0, jobs=1): err= 0: pid=3840156: Mon Dec 9 05:27:22 2024 00:32:46.058 read: IOPS=2518, BW=19.7MiB/s (20.6MB/s)(98.4MiB/5001msec) 00:32:46.058 slat (nsec): min=6230, max=73025, avg=9610.87, stdev=3443.91 00:32:46.058 clat (usec): min=965, max=5860, avg=3146.97, stdev=537.65 00:32:46.058 lat (usec): min=972, max=5872, avg=3156.58, stdev=537.55 00:32:46.058 clat percentiles (usec): 00:32:46.058 | 1.00th=[ 1958], 5.00th=[ 2376], 10.00th=[ 2573], 20.00th=[ 2802], 00:32:46.058 | 30.00th=[ 2900], 40.00th=[ 3064], 50.00th=[ 3097], 60.00th=[ 3163], 00:32:46.058 | 70.00th=[ 3228], 80.00th=[ 3392], 90.00th=[ 3785], 95.00th=[ 4228], 00:32:46.058 | 99.00th=[ 4948], 99.50th=[ 5080], 99.90th=[ 5407], 99.95th=[ 5538], 00:32:46.058 | 99.99th=[ 5735] 00:32:46.058 bw ( KiB/s): min=19360, max=20848, per=24.77%, avg=20131.56, stdev=498.11, samples=9 00:32:46.058 iops : min= 2420, max= 2606, avg=2516.44, stdev=62.26, samples=9 00:32:46.058 lat (usec) : 1000=0.06% 00:32:46.058 lat (msec) : 2=1.14%, 4=91.42%, 10=7.38% 00:32:46.058 cpu : usr=95.48%, sys=4.16%, ctx=13, majf=0, minf=9 00:32:46.058 IO depths : 1=0.1%, 2=7.1%, 4=64.2%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:46.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.058 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.058 issued rwts: total=12596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:46.058 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:46.058 00:32:46.058 Run status group 0 (all jobs): 00:32:46.058 READ: bw=79.4MiB/s (83.2MB/s), 19.4MiB/s-20.5MiB/s (20.3MB/s-21.5MB/s), io=397MiB (416MB), run=5001-5002msec 00:32:46.058 05:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:46.058 05:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:46.058 05:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:46.058 05:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:46.058 05:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:46.058 05:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:46.058 05:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.058 05:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:32:46.058 05:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.058 05:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:46.058 05:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.058 05:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:46.058 05:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.058 05:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:46.058 05:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:46.058 05:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:46.058 05:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:46.058 05:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.058 05:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:46.058 05:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.058 05:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:46.058 05:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.058 05:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:46.058 05:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.058 00:32:46.058 real 0m24.463s 00:32:46.058 user 4m50.979s 00:32:46.058 sys 0m5.609s 00:32:46.058 05:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:46.058 05:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:46.058 ************************************ 00:32:46.058 END TEST fio_dif_rand_params 00:32:46.058 ************************************ 00:32:46.058 05:27:22 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:46.058 05:27:22 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:46.058 05:27:22 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:46.058 05:27:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:46.058 ************************************ 00:32:46.058 START TEST fio_dif_digest 00:32:46.058 ************************************ 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 
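The rand_params run above and the digest run being set up below follow the same pattern: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem, jq assembles them, and the result is fed to fio's spdk_bdev ioengine over /dev/fd/62 while the plugin is LD_PRELOADed. A minimal standalone reproduction of that pattern is sketched here; the outer "subsystems"/"bdev" wrapper, the file paths, the bdev name Nvme0n1, and the job parameters are assumptions, while the ioengine name, the --spdk_json_conf option, and the attach-controller params mirror what the log prints.

  # Hedged sketch: drive one NVMe/TCP-backed bdev through the spdk_bdev fio plugin.
  cat > /tmp/nvme0.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  # Run from the SPDK repo root; the plugin path matches the one used above.
  LD_PRELOAD=./build/fio/spdk_bdev fio --name=job0 --thread=1 \
      --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme0.json \
      --filename=Nvme0n1 --rw=randread --bs=8k --iodepth=8 \
      --runtime=5 --time_based=1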
00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:46.058 bdev_null0 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:46.058 [2024-12-09 05:27:22.369087] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:46.058 { 00:32:46.058 "params": { 00:32:46.058 "name": "Nvme$subsystem", 00:32:46.058 "trtype": 
"$TEST_TRANSPORT", 00:32:46.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:46.058 "adrfam": "ipv4", 00:32:46.058 "trsvcid": "$NVMF_PORT", 00:32:46.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:46.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:46.058 "hdgst": ${hdgst:-false}, 00:32:46.058 "ddgst": ${ddgst:-false} 00:32:46.058 }, 00:32:46.058 "method": "bdev_nvme_attach_controller" 00:32:46.058 } 00:32:46.058 EOF 00:32:46.058 )") 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:32:46.058 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:32:46.059 05:27:22 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:32:46.059 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:46.059 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:32:46.059 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:46.059 05:27:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:32:46.059 05:27:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:32:46.059 05:27:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:46.059 "params": { 00:32:46.059 "name": "Nvme0", 00:32:46.059 "trtype": "tcp", 00:32:46.059 "traddr": "10.0.0.2", 00:32:46.059 "adrfam": "ipv4", 00:32:46.059 "trsvcid": "4420", 00:32:46.059 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:46.059 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:46.059 "hdgst": true, 00:32:46.059 "ddgst": true 00:32:46.059 }, 00:32:46.059 "method": "bdev_nvme_attach_controller" 00:32:46.059 }' 00:32:46.059 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:46.059 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:46.059 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:46.059 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:46.059 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:46.059 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:46.059 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:46.059 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:46.059 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:46.059 05:27:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:46.317 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:46.317 ... 
00:32:46.317 fio-3.35 00:32:46.317 Starting 3 threads 00:32:58.579 00:32:58.579 filename0: (groupid=0, jobs=1): err= 0: pid=3841209: Mon Dec 9 05:27:33 2024 00:32:58.579 read: IOPS=272, BW=34.0MiB/s (35.7MB/s)(342MiB/10047msec) 00:32:58.579 slat (nsec): min=6508, max=45516, avg=14419.18, stdev=6388.78 00:32:58.579 clat (usec): min=7836, max=51814, avg=10995.74, stdev=1311.57 00:32:58.579 lat (usec): min=7848, max=51843, avg=11010.16, stdev=1311.70 00:32:58.579 clat percentiles (usec): 00:32:58.579 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10290], 00:32:58.579 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:32:58.579 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256], 00:32:58.579 | 99.00th=[12911], 99.50th=[13042], 99.90th=[15664], 99.95th=[47449], 00:32:58.579 | 99.99th=[51643] 00:32:58.579 bw ( KiB/s): min=34304, max=35840, per=34.28%, avg=34956.80, stdev=515.19, samples=20 00:32:58.579 iops : min= 268, max= 280, avg=273.10, stdev= 4.02, samples=20 00:32:58.579 lat (msec) : 10=10.65%, 20=89.28%, 50=0.04%, 100=0.04% 00:32:58.579 cpu : usr=94.29%, sys=5.40%, ctx=21, majf=0, minf=69 00:32:58.579 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:58.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.579 issued rwts: total=2733,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.579 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:58.579 filename0: (groupid=0, jobs=1): err= 0: pid=3841210: Mon Dec 9 05:27:33 2024 00:32:58.579 read: IOPS=260, BW=32.5MiB/s (34.1MB/s)(327MiB/10045msec) 00:32:58.579 slat (nsec): min=6548, max=45749, avg=14732.21, stdev=6102.33 00:32:58.579 clat (usec): min=8132, max=51607, avg=11498.81, stdev=1322.05 00:32:58.579 lat (usec): min=8144, max=51616, avg=11513.54, stdev=1321.85 00:32:58.579 clat percentiles (usec): 00:32:58.579 | 1.00th=[ 9503], 5.00th=[10159], 10.00th=[10421], 20.00th=[10814], 00:32:58.579 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:32:58.579 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12911], 00:32:58.579 | 99.00th=[13566], 99.50th=[14091], 99.90th=[15270], 99.95th=[45876], 00:32:58.579 | 99.99th=[51643] 00:32:58.579 bw ( KiB/s): min=32768, max=34048, per=32.76%, avg=33414.74, stdev=355.63, samples=19 00:32:58.579 iops : min= 256, max= 266, avg=261.05, stdev= 2.78, samples=19 00:32:58.579 lat (msec) : 10=3.18%, 20=96.75%, 50=0.04%, 100=0.04% 00:32:58.579 cpu : usr=95.11%, sys=4.56%, ctx=29, majf=0, minf=41 00:32:58.579 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:58.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.579 issued rwts: total=2613,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.579 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:58.579 filename0: (groupid=0, jobs=1): err= 0: pid=3841211: Mon Dec 9 05:27:33 2024 00:32:58.579 read: IOPS=265, BW=33.2MiB/s (34.8MB/s)(332MiB/10003msec) 00:32:58.579 slat (nsec): min=6624, max=48291, avg=14201.05, stdev=6259.73 00:32:58.579 clat (usec): min=5271, max=14244, avg=11268.58, stdev=782.92 00:32:58.579 lat (usec): min=5284, max=14257, avg=11282.78, stdev=782.71 00:32:58.579 clat percentiles (usec): 00:32:58.579 | 1.00th=[ 9372], 5.00th=[10028], 10.00th=[10290], 
20.00th=[10683], 00:32:58.579 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:32:58.579 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12649], 00:32:58.579 | 99.00th=[13173], 99.50th=[13435], 99.90th=[13566], 99.95th=[13698], 00:32:58.579 | 99.99th=[14222] 00:32:58.579 bw ( KiB/s): min=33280, max=35328, per=33.35%, avg=34007.58, stdev=520.91, samples=19 00:32:58.579 iops : min= 260, max= 276, avg=265.68, stdev= 4.07, samples=19 00:32:58.579 lat (msec) : 10=5.04%, 20=94.96% 00:32:58.579 cpu : usr=94.30%, sys=5.38%, ctx=25, majf=0, minf=38 00:32:58.579 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:58.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.579 issued rwts: total=2659,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.579 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:58.579 00:32:58.579 Run status group 0 (all jobs): 00:32:58.579 READ: bw=99.6MiB/s (104MB/s), 32.5MiB/s-34.0MiB/s (34.1MB/s-35.7MB/s), io=1001MiB (1049MB), run=10003-10047msec 00:32:58.579 05:27:33 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:58.579 05:27:33 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:32:58.579 05:27:33 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:32:58.579 05:27:33 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:58.579 05:27:33 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:32:58.579 05:27:33 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:58.579 05:27:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.579 05:27:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:58.579 05:27:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.579 05:27:33 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:58.579 05:27:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.579 05:27:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:58.579 05:27:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.579 00:32:58.579 real 0m11.097s 00:32:58.579 user 0m34.747s 00:32:58.579 sys 0m1.795s 00:32:58.579 05:27:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:58.579 05:27:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:58.579 ************************************ 00:32:58.579 END TEST fio_dif_digest 00:32:58.579 ************************************ 00:32:58.579 05:27:33 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:58.579 05:27:33 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:32:58.579 05:27:33 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:58.579 05:27:33 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:32:58.579 05:27:33 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:58.579 05:27:33 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:32:58.579 05:27:33 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:58.580 05:27:33 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:58.580 rmmod nvme_tcp 00:32:58.580 rmmod nvme_fabrics 00:32:58.580 rmmod nvme_keyring 00:32:58.580 05:27:33 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:58.580 05:27:33 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:32:58.580 05:27:33 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:32:58.580 05:27:33 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3832294 ']' 00:32:58.580 05:27:33 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3832294 00:32:58.580 05:27:33 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3832294 ']' 00:32:58.580 05:27:33 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3832294 00:32:58.580 05:27:33 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:32:58.580 05:27:33 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:58.580 05:27:33 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3832294 00:32:58.580 05:27:33 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:58.580 05:27:33 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:58.580 05:27:33 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3832294' 00:32:58.580 killing process with pid 3832294 00:32:58.580 05:27:33 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3832294 00:32:58.580 05:27:33 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3832294 00:32:58.580 05:27:33 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:32:58.580 05:27:33 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:59.956 Waiting for block devices as requested 00:32:59.956 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:32:59.956 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:59.956 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:59.956 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:00.214 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:00.214 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:00.214 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:00.214 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:00.473 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:00.473 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:00.473 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:00.732 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:00.732 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:00.732 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:00.732 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:00.992 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:00.992 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:00.992 05:27:37 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:00.992 05:27:37 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:00.992 05:27:37 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:33:00.992 05:27:37 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:33:00.992 05:27:37 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:00.992 05:27:37 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:33:00.992 05:27:37 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:00.992 05:27:37 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:00.992 05:27:37 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:00.992 05:27:37 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:00.992 05:27:37 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:03.529 05:27:39 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:03.529 
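nvmftestfini above unwinds the TCP test bed: the nvme-tcp and nvme-fabrics modules are removed, the nvmf_tgt process (pid 3832294) is killed, scripts/setup.sh reset rebinds the PCI devices back to their kernel drivers, the SPDK-tagged iptables rules are dropped, and the leftover interface address is flushed. Condensed into a hedged sketch (remove_spdk_ns is never expanded in the log, so the netns delete line is an assumption):

  sync
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 3832294                                           # killprocess "$nvmfpid"
  ./scripts/setup.sh reset                               # rebind devices back to kernel drivers
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: keep everything except SPDK rules
  ip netns delete cvl_0_0_ns_spdk                        # assumed body of remove_spdk_ns
  ip -4 addr flush cvl_0_1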
00:33:03.529 real 1m13.182s 00:33:03.529 user 7m7.004s 00:33:03.529 sys 0m20.517s 00:33:03.529 05:27:39 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:03.529 05:27:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:03.529 ************************************ 00:33:03.529 END TEST nvmf_dif 00:33:03.529 ************************************ 00:33:03.529 05:27:39 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:03.529 05:27:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:03.530 05:27:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:03.530 05:27:39 -- common/autotest_common.sh@10 -- # set +x 00:33:03.530 ************************************ 00:33:03.530 START TEST nvmf_abort_qd_sizes 00:33:03.530 ************************************ 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:03.530 * Looking for test storage... 00:33:03.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:03.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.530 --rc genhtml_branch_coverage=1 00:33:03.530 --rc genhtml_function_coverage=1 00:33:03.530 --rc genhtml_legend=1 00:33:03.530 --rc geninfo_all_blocks=1 00:33:03.530 --rc geninfo_unexecuted_blocks=1 00:33:03.530 00:33:03.530 ' 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:03.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.530 --rc genhtml_branch_coverage=1 00:33:03.530 --rc genhtml_function_coverage=1 00:33:03.530 --rc genhtml_legend=1 00:33:03.530 --rc geninfo_all_blocks=1 00:33:03.530 --rc geninfo_unexecuted_blocks=1 00:33:03.530 00:33:03.530 ' 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:03.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.530 --rc genhtml_branch_coverage=1 00:33:03.530 --rc genhtml_function_coverage=1 00:33:03.530 --rc genhtml_legend=1 00:33:03.530 --rc geninfo_all_blocks=1 00:33:03.530 --rc geninfo_unexecuted_blocks=1 00:33:03.530 00:33:03.530 ' 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:03.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.530 --rc genhtml_branch_coverage=1 00:33:03.530 --rc genhtml_function_coverage=1 00:33:03.530 --rc genhtml_legend=1 00:33:03.530 --rc geninfo_all_blocks=1 00:33:03.530 --rc geninfo_unexecuted_blocks=1 00:33:03.530 00:33:03.530 ' 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:03.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:33:03.530 05:27:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:08.787 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:08.787 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:08.788 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:08.788 Found net devices under 0000:86:00.0: cvl_0_0 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:08.788 Found net devices under 0000:86:00.1: cvl_0_1 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:08.788 05:27:45 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:08.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:08.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:33:08.788 00:33:08.788 --- 10.0.0.2 ping statistics --- 00:33:08.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:08.788 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:08.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:08.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:33:08.788 00:33:08.788 --- 10.0.0.1 ping statistics --- 00:33:08.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:08.788 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:08.788 05:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:11.319 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:11.319 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:11.319 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:11.319 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:11.319 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:11.319 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:11.319 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:11.319 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:11.319 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:11.319 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:11.319 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:11.319 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:11.319 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:11.577 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:11.577 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:11.577 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:12.142 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:33:12.400 05:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:12.400 05:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:12.400 05:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:12.400 05:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:12.400 05:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:12.400 05:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:12.400 05:27:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:33:12.400 05:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:12.400 05:27:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:12.400 05:27:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:12.400 05:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3848992 00:33:12.400 05:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:33:12.400 05:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3848992 00:33:12.400 05:27:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3848992 ']' 00:33:12.400 05:27:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:12.400 05:27:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:12.400 05:27:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:12.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:12.400 05:27:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:12.400 05:27:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:12.400 [2024-12-09 05:27:48.983120] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:33:12.400 [2024-12-09 05:27:48.983167] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:12.659 [2024-12-09 05:27:49.052703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:12.659 [2024-12-09 05:27:49.096916] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:12.659 [2024-12-09 05:27:49.096952] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:12.659 [2024-12-09 05:27:49.096959] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:12.659 [2024-12-09 05:27:49.096966] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:12.659 [2024-12-09 05:27:49.096971] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:12.659 [2024-12-09 05:27:49.098411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:12.659 [2024-12-09 05:27:49.098508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:12.659 [2024-12-09 05:27:49.098599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:12.659 [2024-12-09 05:27:49.098601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:12.659 05:27:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:12.659 05:27:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:33:12.659 05:27:49 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:12.659 05:27:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:12.659 05:27:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:12.659 05:27:49 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:12.659 05:27:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:33:12.659 05:27:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:33:12.659 05:27:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:33:12.659 05:27:49 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:33:12.659 05:27:49 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:33:12.659 05:27:49 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:33:12.659 05:27:49 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:33:12.659 05:27:49 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:33:12.659 05:27:49 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:33:12.659 05:27:49 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:33:12.659 
05:27:49 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:33:12.659 05:27:49 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:33:12.659 05:27:49 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:33:12.659 05:27:49 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:33:12.659 05:27:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:33:12.659 05:27:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:33:12.659 05:27:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:33:12.659 05:27:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:12.659 05:27:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:12.659 05:27:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:12.659 ************************************ 00:33:12.659 START TEST spdk_target_abort 00:33:12.659 ************************************ 00:33:12.659 05:27:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:33:12.659 05:27:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:33:12.659 05:27:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:33:12.659 05:27:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.659 05:27:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:15.943 spdk_targetn1 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:15.943 [2024-12-09 05:27:52.115485] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:15.943 [2024-12-09 05:27:52.160859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:15.943 05:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:19.226 Initializing NVMe Controllers 00:33:19.226 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:19.226 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:19.226 Initialization complete. Launching workers. 00:33:19.226 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15073, failed: 0 00:33:19.226 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1402, failed to submit 13671 00:33:19.226 success 718, unsuccessful 684, failed 0 00:33:19.226 05:27:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:19.226 05:27:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:22.516 Initializing NVMe Controllers 00:33:22.516 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:22.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:22.516 Initialization complete. Launching workers. 00:33:22.516 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8647, failed: 0 00:33:22.516 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1220, failed to submit 7427 00:33:22.516 success 319, unsuccessful 901, failed 0 00:33:22.516 05:27:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:22.516 05:27:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:25.824 Initializing NVMe Controllers 00:33:25.824 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:25.824 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:25.824 Initialization complete. Launching workers. 
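For each queue depth, the abort example's summary counters above are internally consistent: every completed I/O either had an abort submitted for it or the abort could not be submitted, and every submitted abort either succeeded or did not. A minimal bash sketch of that bookkeeping, using the qd=4 and qd=24 figures reported above (the variable names are illustrative only, not part of the test scripts):

#!/usr/bin/env bash
# qd=4 run above: 15073 I/Os completed, 1402 aborts submitted, 13671 not submitted.
io_completed=15073; abort_submitted=1402; failed_to_submit=13671
success=718; unsuccessful=684; failed=0

(( io_completed == abort_submitted + failed_to_submit )) && echo "qd=4: completed = submitted + not-submitted"
(( abort_submitted == success + unsuccessful + failed )) && echo "qd=4: submitted = success + unsuccessful + failed"

# The qd=24 run balances the same way: 8647 = 1220 + 7427 and 1220 = 319 + 901.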
00:33:25.824 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37679, failed: 0 00:33:25.824 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2655, failed to submit 35024 00:33:25.824 success 565, unsuccessful 2090, failed 0 00:33:25.824 05:28:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:33:25.824 05:28:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.824 05:28:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:25.824 05:28:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.824 05:28:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:25.824 05:28:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.824 05:28:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:27.199 05:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.199 05:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3848992 00:33:27.199 05:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3848992 ']' 00:33:27.199 05:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3848992 00:33:27.199 05:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:33:27.199 05:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:27.199 05:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3848992 00:33:27.199 05:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:27.199 05:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:27.199 05:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3848992' 00:33:27.199 killing process with pid 3848992 00:33:27.199 05:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3848992 00:33:27.199 05:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3848992 00:33:27.199 00:33:27.199 real 0m14.390s 00:33:27.199 user 0m54.740s 00:33:27.199 sys 0m2.613s 00:33:27.199 05:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:27.199 05:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:27.199 ************************************ 00:33:27.199 END TEST spdk_target_abort 00:33:27.199 ************************************ 00:33:27.199 05:28:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:33:27.199 05:28:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:27.199 05:28:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:27.199 05:28:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:27.199 ************************************ 00:33:27.199 START TEST kernel_target_abort 00:33:27.199 
************************************ 00:33:27.199 05:28:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:33:27.199 05:28:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:33:27.199 05:28:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:33:27.199 05:28:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:27.200 05:28:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:27.200 05:28:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:27.200 05:28:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:27.200 05:28:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:27.200 05:28:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:27.200 05:28:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:27.200 05:28:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:27.200 05:28:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:27.200 05:28:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:27.200 05:28:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:27.200 05:28:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:27.200 05:28:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:27.200 05:28:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:27.200 05:28:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:27.200 05:28:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:33:27.200 05:28:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:27.200 05:28:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:27.200 05:28:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:27.200 05:28:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:29.731 Waiting for block devices as requested 00:33:29.731 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:33:29.731 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:29.991 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:29.991 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:29.991 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:29.991 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:30.251 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:30.251 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:30.251 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:30.251 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:30.511 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:30.511 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:30.511 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:30.511 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:30.770 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:30.770 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:30.770 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:30.770 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:30.770 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:31.030 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:31.030 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:33:31.030 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:31.030 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:33:31.030 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:31.030 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:31.030 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:31.030 No valid GPT data, bailing 00:33:31.030 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:31.030 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:33:31.030 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:33:31.030 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:31.030 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:31.030 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:31.030 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:31.030 05:28:07 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:31.030 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:31.030 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:33:31.030 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:31.030 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:33:31.030 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:33:31.030 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:33:31.030 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:33:31.030 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:33:31.030 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:31.030 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:33:31.030 00:33:31.030 Discovery Log Number of Records 2, Generation counter 2 00:33:31.030 =====Discovery Log Entry 0====== 00:33:31.030 trtype: tcp 00:33:31.030 adrfam: ipv4 00:33:31.030 subtype: current discovery subsystem 00:33:31.030 treq: not specified, sq flow control disable supported 00:33:31.030 portid: 1 00:33:31.030 trsvcid: 4420 00:33:31.030 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:31.030 traddr: 10.0.0.1 00:33:31.030 eflags: none 00:33:31.030 sectype: none 00:33:31.030 =====Discovery Log Entry 1====== 00:33:31.030 trtype: tcp 00:33:31.030 adrfam: ipv4 00:33:31.030 subtype: nvme subsystem 00:33:31.030 treq: not specified, sq flow control disable supported 00:33:31.030 portid: 1 00:33:31.030 trsvcid: 4420 00:33:31.031 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:31.031 traddr: 10.0.0.1 00:33:31.031 eflags: none 00:33:31.031 sectype: none 00:33:31.031 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:33:31.031 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:31.031 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:31.031 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:31.031 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:31.031 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:31.031 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:31.031 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:31.031 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:31.031 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:31.031 05:28:07 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:31.031 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:31.031 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:31.031 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:31.031 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:31.031 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:31.031 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:31.031 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:31.031 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:31.031 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:31.031 05:28:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:34.320 Initializing NVMe Controllers 00:33:34.320 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:34.320 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:34.320 Initialization complete. Launching workers. 00:33:34.320 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 85543, failed: 0 00:33:34.320 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 85543, failed to submit 0 00:33:34.320 success 0, unsuccessful 85543, failed 0 00:33:34.320 05:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:34.320 05:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:37.604 Initializing NVMe Controllers 00:33:37.604 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:37.604 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:37.604 Initialization complete. Launching workers. 
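The configure_kernel_target steps traced above (nvmf/common.sh@660-705) assemble an in-kernel NVMe/TCP target entirely through configfs: create a subsystem with one namespace, point the namespace at /dev/nvme0n1, open a TCP port on 10.0.0.1:4420, and link the subsystem into the port. A consolidated sketch of that sequence follows; the echoed values come from the trace, while the attribute file names are the standard Linux nvmet configfs paths and are assumptions here, since bash xtrace does not print redirection targets.

#!/usr/bin/env bash
# Sketch of the kernel target built by configure_kernel_target in the trace above.
# Attribute paths below are the usual nvmet configfs names (assumed, not shown by xtrace).
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

modprobe nvmet                                                 # common.sh@670
mkdir "$subsys"                                                # common.sh@686
mkdir "$subsys/namespaces/1"                                   # common.sh@687
mkdir "$nvmet/ports/1"                                         # common.sh@688

echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # common.sh@693
echo 1 > "$subsys/attr_allow_any_host"                         # common.sh@695
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"         # common.sh@696
echo 1 > "$subsys/namespaces/1/enable"                         # common.sh@697

echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"                   # common.sh@699
echo tcp > "$nvmet/ports/1/addr_trtype"                        # common.sh@700
echo 4420 > "$nvmet/ports/1/addr_trsvcid"                      # common.sh@701
echo ipv4 > "$nvmet/ports/1/addr_adrfam"                       # common.sh@702

ln -s "$subsys" "$nvmet/ports/1/subsystems/"                   # common.sh@705

The clean_kernel_target teardown later in this run mirrors the same structure in reverse: disable the namespace, remove the port link, rmdir the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.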
00:33:37.604 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 137706, failed: 0 00:33:37.604 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34574, failed to submit 103132 00:33:37.604 success 0, unsuccessful 34574, failed 0 00:33:37.604 05:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:37.604 05:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:40.886 Initializing NVMe Controllers 00:33:40.886 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:40.886 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:40.886 Initialization complete. Launching workers. 00:33:40.886 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 130513, failed: 0 00:33:40.886 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32662, failed to submit 97851 00:33:40.886 success 0, unsuccessful 32662, failed 0 00:33:40.886 05:28:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:40.886 05:28:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:40.886 05:28:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:33:40.886 05:28:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:40.886 05:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:40.886 05:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:40.886 05:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:40.886 05:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:33:40.886 05:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:33:40.886 05:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:42.789 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:42.789 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:42.789 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:42.789 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:42.789 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:42.789 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:42.789 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:42.789 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:42.789 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:42.789 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:42.789 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:42.789 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:42.789 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:42.789 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:42.789 0000:80:04.1 (8086 2021): ioatdma 
-> vfio-pci 00:33:42.789 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:43.726 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:33:43.726 00:33:43.726 real 0m16.595s 00:33:43.726 user 0m8.293s 00:33:43.726 sys 0m4.634s 00:33:43.726 05:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:43.726 05:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:43.726 ************************************ 00:33:43.726 END TEST kernel_target_abort 00:33:43.726 ************************************ 00:33:43.984 05:28:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:43.984 05:28:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:43.984 05:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:43.984 05:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:33:43.984 05:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:43.984 05:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:33:43.984 05:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:43.984 05:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:43.984 rmmod nvme_tcp 00:33:43.984 rmmod nvme_fabrics 00:33:43.984 rmmod nvme_keyring 00:33:43.984 05:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:43.984 05:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:33:43.984 05:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:33:43.984 05:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3848992 ']' 00:33:43.984 05:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3848992 00:33:43.984 05:28:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3848992 ']' 00:33:43.984 05:28:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3848992 00:33:43.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3848992) - No such process 00:33:43.984 05:28:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3848992 is not found' 00:33:43.984 Process with pid 3848992 is not found 00:33:43.984 05:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:33:43.984 05:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:46.661 Waiting for block devices as requested 00:33:46.661 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:33:46.661 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:46.661 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:46.661 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:46.918 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:46.918 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:46.918 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:46.918 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:47.176 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:47.176 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:47.176 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:47.434 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:47.434 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:47.434 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:47.434 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:47.691 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:47.691 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:47.691 05:28:24 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:47.691 05:28:24 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:47.691 05:28:24 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:33:47.691 05:28:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:33:47.691 05:28:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:47.691 05:28:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:33:47.691 05:28:24 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:47.691 05:28:24 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:47.692 05:28:24 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:47.692 05:28:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:47.692 05:28:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:50.221 05:28:26 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:50.221 00:33:50.221 real 0m46.634s 00:33:50.222 user 1m7.168s 00:33:50.222 sys 0m15.265s 00:33:50.222 05:28:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:50.222 05:28:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:50.222 ************************************ 00:33:50.222 END TEST nvmf_abort_qd_sizes 00:33:50.222 ************************************ 00:33:50.222 05:28:26 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:50.222 05:28:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:50.222 05:28:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:50.222 05:28:26 -- common/autotest_common.sh@10 -- # set +x 00:33:50.222 ************************************ 00:33:50.222 START TEST keyring_file 00:33:50.222 ************************************ 00:33:50.222 05:28:26 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:50.222 * Looking for test storage... 
00:33:50.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:50.222 05:28:26 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:50.222 05:28:26 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:33:50.222 05:28:26 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:50.222 05:28:26 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@345 -- # : 1 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@353 -- # local d=1 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@355 -- # echo 1 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@353 -- # local d=2 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@355 -- # echo 2 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@368 -- # return 0 00:33:50.222 05:28:26 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:50.222 05:28:26 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:50.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.222 --rc genhtml_branch_coverage=1 00:33:50.222 --rc genhtml_function_coverage=1 00:33:50.222 --rc genhtml_legend=1 00:33:50.222 --rc geninfo_all_blocks=1 00:33:50.222 --rc geninfo_unexecuted_blocks=1 00:33:50.222 00:33:50.222 ' 00:33:50.222 05:28:26 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:50.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.222 --rc genhtml_branch_coverage=1 00:33:50.222 --rc genhtml_function_coverage=1 00:33:50.222 --rc genhtml_legend=1 00:33:50.222 --rc geninfo_all_blocks=1 
00:33:50.222 --rc geninfo_unexecuted_blocks=1 00:33:50.222 00:33:50.222 ' 00:33:50.222 05:28:26 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:50.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.222 --rc genhtml_branch_coverage=1 00:33:50.222 --rc genhtml_function_coverage=1 00:33:50.222 --rc genhtml_legend=1 00:33:50.222 --rc geninfo_all_blocks=1 00:33:50.222 --rc geninfo_unexecuted_blocks=1 00:33:50.222 00:33:50.222 ' 00:33:50.222 05:28:26 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:50.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.222 --rc genhtml_branch_coverage=1 00:33:50.222 --rc genhtml_function_coverage=1 00:33:50.222 --rc genhtml_legend=1 00:33:50.222 --rc geninfo_all_blocks=1 00:33:50.222 --rc geninfo_unexecuted_blocks=1 00:33:50.222 00:33:50.222 ' 00:33:50.222 05:28:26 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:50.222 05:28:26 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:50.222 05:28:26 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:50.222 05:28:26 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.222 05:28:26 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.222 05:28:26 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.222 05:28:26 keyring_file -- paths/export.sh@5 -- # export PATH 00:33:50.222 05:28:26 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@51 -- # : 0 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:50.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:50.222 05:28:26 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:50.222 05:28:26 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:50.222 05:28:26 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:50.222 05:28:26 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:50.222 05:28:26 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:50.222 05:28:26 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:50.222 05:28:26 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:50.222 05:28:26 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:50.222 05:28:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
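The lt 1.15 2 check traced above (scripts/common.sh@333-368) splits each version string on '.', '-' and ':' and compares the pieces numerically from left to right, treating missing components as zero. A minimal self-contained bash sketch of the same idea; the function name here is illustrative, not the one in scripts/common.sh:

#!/usr/bin/env bash
# Succeeds (exit 0) when $1 is strictly lower than $2, comparing dotted components numerically.
version_lt() {
  local IFS='.-:'
  local -a a=($1) b=($2)
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for ((i = 0; i < n; i++)); do
    local x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"   # matches the lcov version gate in the trace above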
00:33:50.222 05:28:26 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:50.222 05:28:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:50.223 05:28:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:50.223 05:28:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:50.223 05:28:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.zpaXQNYOf0 00:33:50.223 05:28:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:50.223 05:28:26 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:50.223 05:28:26 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:33:50.223 05:28:26 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:50.223 05:28:26 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:33:50.223 05:28:26 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:33:50.223 05:28:26 keyring_file -- nvmf/common.sh@733 -- # python - 00:33:50.223 05:28:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.zpaXQNYOf0 00:33:50.223 05:28:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.zpaXQNYOf0 00:33:50.223 05:28:26 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.zpaXQNYOf0 00:33:50.223 05:28:26 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:50.223 05:28:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:50.223 05:28:26 keyring_file -- keyring/common.sh@17 -- # name=key1 00:33:50.223 05:28:26 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:50.223 05:28:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:50.223 05:28:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:50.223 05:28:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.X7SbGmBMnY 00:33:50.223 05:28:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:50.223 05:28:26 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:50.223 05:28:26 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:33:50.223 05:28:26 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:50.223 05:28:26 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:33:50.223 05:28:26 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:33:50.223 05:28:26 keyring_file -- nvmf/common.sh@733 -- # python - 00:33:50.223 05:28:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.X7SbGmBMnY 00:33:50.223 05:28:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.X7SbGmBMnY 00:33:50.223 05:28:26 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.X7SbGmBMnY 00:33:50.223 05:28:26 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:50.223 05:28:26 keyring_file -- keyring/file.sh@30 -- # tgtpid=3857732 00:33:50.223 05:28:26 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3857732 00:33:50.223 05:28:26 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3857732 ']' 00:33:50.223 05:28:26 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:50.223 05:28:26 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:50.223 05:28:26 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:50.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:50.223 05:28:26 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:50.223 05:28:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:50.223 [2024-12-09 05:28:26.803688] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:33:50.223 [2024-12-09 05:28:26.803737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3857732 ] 00:33:50.481 [2024-12-09 05:28:26.869326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:50.481 [2024-12-09 05:28:26.912429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:50.481 05:28:27 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:50.481 05:28:27 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:33:50.481 05:28:27 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:33:50.481 05:28:27 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.481 05:28:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:50.481 [2024-12-09 05:28:27.122532] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:50.739 null0 00:33:50.739 [2024-12-09 05:28:27.154586] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:50.739 [2024-12-09 05:28:27.154909] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:50.739 05:28:27 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.739 05:28:27 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:50.739 05:28:27 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:33:50.739 05:28:27 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:50.739 05:28:27 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:50.739 05:28:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:50.739 05:28:27 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:50.739 05:28:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:50.739 05:28:27 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:50.739 05:28:27 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.739 05:28:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:50.739 [2024-12-09 05:28:27.182648] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:33:50.739 request: 00:33:50.739 { 00:33:50.739 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:50.739 "secure_channel": false, 00:33:50.739 "listen_address": { 00:33:50.739 "trtype": "tcp", 00:33:50.739 "traddr": "127.0.0.1", 00:33:50.739 "trsvcid": "4420" 00:33:50.739 }, 00:33:50.739 "method": "nvmf_subsystem_add_listener", 00:33:50.739 "req_id": 1 00:33:50.739 } 00:33:50.739 Got JSON-RPC error response 00:33:50.739 response: 00:33:50.739 { 00:33:50.739 
"code": -32602, 00:33:50.739 "message": "Invalid parameters" 00:33:50.739 } 00:33:50.739 05:28:27 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:50.739 05:28:27 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:33:50.739 05:28:27 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:50.739 05:28:27 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:50.739 05:28:27 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:50.739 05:28:27 keyring_file -- keyring/file.sh@47 -- # bperfpid=3857763 00:33:50.739 05:28:27 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:50.739 05:28:27 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3857763 /var/tmp/bperf.sock 00:33:50.739 05:28:27 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3857763 ']' 00:33:50.739 05:28:27 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:50.739 05:28:27 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:50.739 05:28:27 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:50.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:50.739 05:28:27 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:50.739 05:28:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:50.739 [2024-12-09 05:28:27.235056] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:33:50.739 [2024-12-09 05:28:27.235100] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3857763 ] 00:33:50.739 [2024-12-09 05:28:27.298252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:50.739 [2024-12-09 05:28:27.338883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:50.997 05:28:27 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:50.997 05:28:27 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:33:50.997 05:28:27 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zpaXQNYOf0 00:33:50.997 05:28:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zpaXQNYOf0 00:33:50.997 05:28:27 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.X7SbGmBMnY 00:33:50.997 05:28:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.X7SbGmBMnY 00:33:51.255 05:28:27 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:33:51.255 05:28:27 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:33:51.255 05:28:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:51.255 05:28:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:51.255 05:28:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:33:51.513 05:28:28 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.zpaXQNYOf0 == \/\t\m\p\/\t\m\p\.\z\p\a\X\Q\N\Y\O\f\0 ]] 00:33:51.513 05:28:28 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:33:51.513 05:28:28 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:33:51.513 05:28:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:51.513 05:28:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:51.513 05:28:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:51.770 05:28:28 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.X7SbGmBMnY == \/\t\m\p\/\t\m\p\.\X\7\S\b\G\m\B\M\n\Y ]] 00:33:51.770 05:28:28 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:33:51.770 05:28:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:51.770 05:28:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:51.770 05:28:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:51.770 05:28:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:51.770 05:28:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:52.028 05:28:28 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:33:52.028 05:28:28 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:33:52.028 05:28:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:52.028 05:28:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:52.028 05:28:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:52.028 05:28:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:52.028 05:28:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:52.028 05:28:28 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:33:52.028 05:28:28 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:52.028 05:28:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:52.285 [2024-12-09 05:28:28.810890] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:52.285 nvme0n1 00:33:52.285 05:28:28 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:33:52.285 05:28:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:52.285 05:28:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:52.285 05:28:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:52.285 05:28:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:52.285 05:28:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:52.543 05:28:29 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:33:52.543 05:28:29 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:33:52.543 05:28:29 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:33:52.543 05:28:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:52.543 05:28:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:52.543 05:28:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:52.543 05:28:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:52.801 05:28:29 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:33:52.801 05:28:29 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:52.801 Running I/O for 1 seconds... 00:33:53.991 15878.00 IOPS, 62.02 MiB/s 00:33:53.991 Latency(us) 00:33:53.991 [2024-12-09T04:28:30.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:53.991 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:53.991 nvme0n1 : 1.01 15912.56 62.16 0.00 0.00 8022.21 4188.61 20173.69 00:33:53.991 [2024-12-09T04:28:30.637Z] =================================================================================================================== 00:33:53.991 [2024-12-09T04:28:30.637Z] Total : 15912.56 62.16 0.00 0.00 8022.21 4188.61 20173.69 00:33:53.991 { 00:33:53.991 "results": [ 00:33:53.992 { 00:33:53.992 "job": "nvme0n1", 00:33:53.992 "core_mask": "0x2", 00:33:53.992 "workload": "randrw", 00:33:53.992 "percentage": 50, 00:33:53.992 "status": "finished", 00:33:53.992 "queue_depth": 128, 00:33:53.992 "io_size": 4096, 00:33:53.992 "runtime": 1.005935, 00:33:53.992 "iops": 15912.558962557223, 00:33:53.992 "mibps": 62.15843344748915, 00:33:53.992 "io_failed": 0, 00:33:53.992 "io_timeout": 0, 00:33:53.992 "avg_latency_us": 8022.205151333249, 00:33:53.992 "min_latency_us": 4188.605217391304, 00:33:53.992 "max_latency_us": 20173.69043478261 00:33:53.992 } 00:33:53.992 ], 00:33:53.992 "core_count": 1 00:33:53.992 } 00:33:53.992 05:28:30 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:53.992 05:28:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:53.992 05:28:30 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:33:53.992 05:28:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:53.992 05:28:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:53.992 05:28:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:53.992 05:28:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:53.992 05:28:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:54.250 05:28:30 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:54.250 05:28:30 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:33:54.250 05:28:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:54.250 05:28:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:54.250 05:28:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:54.250 05:28:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:54.250 05:28:30 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:54.507 05:28:31 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:33:54.507 05:28:31 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:54.507 05:28:31 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:33:54.508 05:28:31 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:54.508 05:28:31 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:33:54.508 05:28:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:54.508 05:28:31 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:33:54.508 05:28:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:54.508 05:28:31 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:54.508 05:28:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:54.764 [2024-12-09 05:28:31.208291] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:54.764 [2024-12-09 05:28:31.208975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9a3f0 (107): Transport endpoint is not connected 00:33:54.764 [2024-12-09 05:28:31.209970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9a3f0 (9): Bad file descriptor 00:33:54.765 [2024-12-09 05:28:31.210972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:33:54.765 [2024-12-09 05:28:31.210982] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:54.765 [2024-12-09 05:28:31.210990] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:33:54.765 [2024-12-09 05:28:31.211001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:33:54.765 request: 00:33:54.765 { 00:33:54.765 "name": "nvme0", 00:33:54.765 "trtype": "tcp", 00:33:54.765 "traddr": "127.0.0.1", 00:33:54.765 "adrfam": "ipv4", 00:33:54.765 "trsvcid": "4420", 00:33:54.765 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:54.765 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:54.765 "prchk_reftag": false, 00:33:54.765 "prchk_guard": false, 00:33:54.765 "hdgst": false, 00:33:54.765 "ddgst": false, 00:33:54.765 "psk": "key1", 00:33:54.765 "allow_unrecognized_csi": false, 00:33:54.765 "method": "bdev_nvme_attach_controller", 00:33:54.765 "req_id": 1 00:33:54.765 } 00:33:54.765 Got JSON-RPC error response 00:33:54.765 response: 00:33:54.765 { 00:33:54.765 "code": -5, 00:33:54.765 "message": "Input/output error" 00:33:54.765 } 00:33:54.765 05:28:31 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:33:54.765 05:28:31 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:54.765 05:28:31 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:54.765 05:28:31 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:54.765 05:28:31 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:33:54.765 05:28:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:54.765 05:28:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:54.765 05:28:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:54.765 05:28:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:54.765 05:28:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:55.023 05:28:31 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:55.023 05:28:31 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:33:55.023 05:28:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:55.023 05:28:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:55.023 05:28:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:55.023 05:28:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:55.023 05:28:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:55.023 05:28:31 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:33:55.023 05:28:31 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:33:55.023 05:28:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:55.281 05:28:31 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:33:55.281 05:28:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:55.539 05:28:32 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:33:55.539 05:28:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:55.539 05:28:32 keyring_file -- keyring/file.sh@78 -- # jq length 00:33:55.797 05:28:32 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:33:55.797 05:28:32 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.zpaXQNYOf0 00:33:55.797 05:28:32 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.zpaXQNYOf0 00:33:55.797 05:28:32 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:33:55.797 05:28:32 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.zpaXQNYOf0 00:33:55.797 05:28:32 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:33:55.797 05:28:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:55.797 05:28:32 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:33:55.797 05:28:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:55.797 05:28:32 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zpaXQNYOf0 00:33:55.797 05:28:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zpaXQNYOf0 00:33:55.797 [2024-12-09 05:28:32.380081] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zpaXQNYOf0': 0100660 00:33:55.797 [2024-12-09 05:28:32.380107] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:55.797 request: 00:33:55.797 { 00:33:55.797 "name": "key0", 00:33:55.797 "path": "/tmp/tmp.zpaXQNYOf0", 00:33:55.797 "method": "keyring_file_add_key", 00:33:55.797 "req_id": 1 00:33:55.797 } 00:33:55.797 Got JSON-RPC error response 00:33:55.797 response: 00:33:55.797 { 00:33:55.797 "code": -1, 00:33:55.797 "message": "Operation not permitted" 00:33:55.797 } 00:33:55.797 05:28:32 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:33:55.797 05:28:32 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:55.797 05:28:32 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:55.797 05:28:32 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:55.797 05:28:32 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.zpaXQNYOf0 00:33:55.797 05:28:32 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zpaXQNYOf0 00:33:55.797 05:28:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zpaXQNYOf0 00:33:56.055 05:28:32 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.zpaXQNYOf0 00:33:56.055 05:28:32 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:33:56.055 05:28:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:56.055 05:28:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:56.055 05:28:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:56.055 05:28:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:56.055 05:28:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:56.313 05:28:32 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:33:56.313 05:28:32 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:56.313 05:28:32 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:33:56.313 05:28:32 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:56.313 05:28:32 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:33:56.313 05:28:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:56.313 05:28:32 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:33:56.313 05:28:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:56.313 05:28:32 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:56.313 05:28:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:56.571 [2024-12-09 05:28:32.965682] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.zpaXQNYOf0': No such file or directory 00:33:56.571 [2024-12-09 05:28:32.965704] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:56.571 [2024-12-09 05:28:32.965720] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:56.571 [2024-12-09 05:28:32.965727] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:33:56.571 [2024-12-09 05:28:32.965735] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:56.571 [2024-12-09 05:28:32.965741] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:56.571 request: 00:33:56.571 { 00:33:56.571 "name": "nvme0", 00:33:56.571 "trtype": "tcp", 00:33:56.571 "traddr": "127.0.0.1", 00:33:56.571 "adrfam": "ipv4", 00:33:56.571 "trsvcid": "4420", 00:33:56.571 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:56.571 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:56.571 "prchk_reftag": false, 00:33:56.571 "prchk_guard": false, 00:33:56.571 "hdgst": false, 00:33:56.571 "ddgst": false, 00:33:56.571 "psk": "key0", 00:33:56.571 "allow_unrecognized_csi": false, 00:33:56.571 "method": "bdev_nvme_attach_controller", 00:33:56.571 "req_id": 1 00:33:56.571 } 00:33:56.571 Got JSON-RPC error response 00:33:56.571 response: 00:33:56.571 { 00:33:56.571 "code": -19, 00:33:56.571 "message": "No such device" 00:33:56.571 } 00:33:56.571 05:28:32 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:33:56.571 05:28:32 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:56.571 05:28:32 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:56.571 05:28:32 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:56.571 05:28:32 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:33:56.571 05:28:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:56.571 05:28:33 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:56.571 05:28:33 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:33:56.571 05:28:33 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:56.571 05:28:33 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:56.571 05:28:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:56.571 05:28:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:56.571 05:28:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.eCYTAcIdom 00:33:56.571 05:28:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:56.571 05:28:33 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:56.571 05:28:33 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:33:56.571 05:28:33 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:56.571 05:28:33 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:33:56.571 05:28:33 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:33:56.571 05:28:33 keyring_file -- nvmf/common.sh@733 -- # python - 00:33:56.830 05:28:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.eCYTAcIdom 00:33:56.830 05:28:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.eCYTAcIdom 00:33:56.830 05:28:33 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.eCYTAcIdom 00:33:56.830 05:28:33 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eCYTAcIdom 00:33:56.830 05:28:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eCYTAcIdom 00:33:56.830 05:28:33 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:56.830 05:28:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:57.087 nvme0n1 00:33:57.087 05:28:33 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:33:57.087 05:28:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:57.087 05:28:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:57.087 05:28:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:57.087 05:28:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:57.087 05:28:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:57.345 05:28:33 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:33:57.346 05:28:33 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:33:57.346 05:28:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:57.603 05:28:34 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:33:57.603 05:28:34 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:33:57.603 05:28:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:57.603 05:28:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:57.603 05:28:34 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:57.860 05:28:34 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:33:57.860 05:28:34 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:33:57.860 05:28:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:57.860 05:28:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:57.860 05:28:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:57.860 05:28:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:57.860 05:28:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:57.860 05:28:34 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:33:57.860 05:28:34 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:57.860 05:28:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:58.116 05:28:34 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:33:58.116 05:28:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:58.116 05:28:34 keyring_file -- keyring/file.sh@105 -- # jq length 00:33:58.373 05:28:34 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:33:58.373 05:28:34 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eCYTAcIdom 00:33:58.373 05:28:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eCYTAcIdom 00:33:58.629 05:28:35 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.X7SbGmBMnY 00:33:58.629 05:28:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.X7SbGmBMnY 00:33:58.630 05:28:35 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:58.630 05:28:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:58.886 nvme0n1 00:33:58.886 05:28:35 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:33:58.886 05:28:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:59.143 05:28:35 keyring_file -- keyring/file.sh@113 -- # config='{ 00:33:59.143 "subsystems": [ 00:33:59.143 { 00:33:59.143 "subsystem": "keyring", 00:33:59.143 "config": [ 00:33:59.143 { 00:33:59.143 "method": "keyring_file_add_key", 00:33:59.143 "params": { 00:33:59.143 "name": "key0", 00:33:59.143 "path": "/tmp/tmp.eCYTAcIdom" 00:33:59.143 } 00:33:59.143 }, 00:33:59.143 { 00:33:59.143 "method": "keyring_file_add_key", 00:33:59.143 "params": { 00:33:59.143 "name": "key1", 00:33:59.143 "path": "/tmp/tmp.X7SbGmBMnY" 00:33:59.143 } 00:33:59.143 } 00:33:59.143 ] 00:33:59.143 
}, 00:33:59.143 { 00:33:59.143 "subsystem": "iobuf", 00:33:59.143 "config": [ 00:33:59.143 { 00:33:59.143 "method": "iobuf_set_options", 00:33:59.143 "params": { 00:33:59.143 "small_pool_count": 8192, 00:33:59.143 "large_pool_count": 1024, 00:33:59.143 "small_bufsize": 8192, 00:33:59.143 "large_bufsize": 135168, 00:33:59.143 "enable_numa": false 00:33:59.143 } 00:33:59.143 } 00:33:59.143 ] 00:33:59.143 }, 00:33:59.143 { 00:33:59.143 "subsystem": "sock", 00:33:59.143 "config": [ 00:33:59.143 { 00:33:59.143 "method": "sock_set_default_impl", 00:33:59.143 "params": { 00:33:59.143 "impl_name": "posix" 00:33:59.143 } 00:33:59.143 }, 00:33:59.143 { 00:33:59.143 "method": "sock_impl_set_options", 00:33:59.143 "params": { 00:33:59.143 "impl_name": "ssl", 00:33:59.143 "recv_buf_size": 4096, 00:33:59.143 "send_buf_size": 4096, 00:33:59.143 "enable_recv_pipe": true, 00:33:59.143 "enable_quickack": false, 00:33:59.143 "enable_placement_id": 0, 00:33:59.143 "enable_zerocopy_send_server": true, 00:33:59.143 "enable_zerocopy_send_client": false, 00:33:59.143 "zerocopy_threshold": 0, 00:33:59.143 "tls_version": 0, 00:33:59.143 "enable_ktls": false 00:33:59.143 } 00:33:59.143 }, 00:33:59.143 { 00:33:59.143 "method": "sock_impl_set_options", 00:33:59.143 "params": { 00:33:59.143 "impl_name": "posix", 00:33:59.143 "recv_buf_size": 2097152, 00:33:59.143 "send_buf_size": 2097152, 00:33:59.143 "enable_recv_pipe": true, 00:33:59.143 "enable_quickack": false, 00:33:59.143 "enable_placement_id": 0, 00:33:59.143 "enable_zerocopy_send_server": true, 00:33:59.143 "enable_zerocopy_send_client": false, 00:33:59.143 "zerocopy_threshold": 0, 00:33:59.143 "tls_version": 0, 00:33:59.143 "enable_ktls": false 00:33:59.143 } 00:33:59.143 } 00:33:59.143 ] 00:33:59.143 }, 00:33:59.143 { 00:33:59.143 "subsystem": "vmd", 00:33:59.143 "config": [] 00:33:59.143 }, 00:33:59.143 { 00:33:59.143 "subsystem": "accel", 00:33:59.143 "config": [ 00:33:59.143 { 00:33:59.143 "method": "accel_set_options", 00:33:59.143 "params": { 00:33:59.143 "small_cache_size": 128, 00:33:59.143 "large_cache_size": 16, 00:33:59.143 "task_count": 2048, 00:33:59.143 "sequence_count": 2048, 00:33:59.143 "buf_count": 2048 00:33:59.143 } 00:33:59.143 } 00:33:59.143 ] 00:33:59.143 }, 00:33:59.143 { 00:33:59.143 "subsystem": "bdev", 00:33:59.143 "config": [ 00:33:59.143 { 00:33:59.143 "method": "bdev_set_options", 00:33:59.143 "params": { 00:33:59.143 "bdev_io_pool_size": 65535, 00:33:59.143 "bdev_io_cache_size": 256, 00:33:59.143 "bdev_auto_examine": true, 00:33:59.143 "iobuf_small_cache_size": 128, 00:33:59.143 "iobuf_large_cache_size": 16 00:33:59.143 } 00:33:59.143 }, 00:33:59.143 { 00:33:59.143 "method": "bdev_raid_set_options", 00:33:59.143 "params": { 00:33:59.143 "process_window_size_kb": 1024, 00:33:59.143 "process_max_bandwidth_mb_sec": 0 00:33:59.143 } 00:33:59.143 }, 00:33:59.143 { 00:33:59.143 "method": "bdev_iscsi_set_options", 00:33:59.143 "params": { 00:33:59.143 "timeout_sec": 30 00:33:59.143 } 00:33:59.143 }, 00:33:59.143 { 00:33:59.143 "method": "bdev_nvme_set_options", 00:33:59.143 "params": { 00:33:59.143 "action_on_timeout": "none", 00:33:59.143 "timeout_us": 0, 00:33:59.143 "timeout_admin_us": 0, 00:33:59.143 "keep_alive_timeout_ms": 10000, 00:33:59.143 "arbitration_burst": 0, 00:33:59.143 "low_priority_weight": 0, 00:33:59.143 "medium_priority_weight": 0, 00:33:59.143 "high_priority_weight": 0, 00:33:59.143 "nvme_adminq_poll_period_us": 10000, 00:33:59.143 "nvme_ioq_poll_period_us": 0, 00:33:59.143 "io_queue_requests": 512, 00:33:59.143 
"delay_cmd_submit": true, 00:33:59.143 "transport_retry_count": 4, 00:33:59.143 "bdev_retry_count": 3, 00:33:59.143 "transport_ack_timeout": 0, 00:33:59.143 "ctrlr_loss_timeout_sec": 0, 00:33:59.143 "reconnect_delay_sec": 0, 00:33:59.143 "fast_io_fail_timeout_sec": 0, 00:33:59.143 "disable_auto_failback": false, 00:33:59.143 "generate_uuids": false, 00:33:59.143 "transport_tos": 0, 00:33:59.143 "nvme_error_stat": false, 00:33:59.143 "rdma_srq_size": 0, 00:33:59.143 "io_path_stat": false, 00:33:59.143 "allow_accel_sequence": false, 00:33:59.143 "rdma_max_cq_size": 0, 00:33:59.143 "rdma_cm_event_timeout_ms": 0, 00:33:59.143 "dhchap_digests": [ 00:33:59.143 "sha256", 00:33:59.143 "sha384", 00:33:59.143 "sha512" 00:33:59.143 ], 00:33:59.143 "dhchap_dhgroups": [ 00:33:59.143 "null", 00:33:59.143 "ffdhe2048", 00:33:59.143 "ffdhe3072", 00:33:59.143 "ffdhe4096", 00:33:59.143 "ffdhe6144", 00:33:59.143 "ffdhe8192" 00:33:59.143 ] 00:33:59.143 } 00:33:59.143 }, 00:33:59.143 { 00:33:59.143 "method": "bdev_nvme_attach_controller", 00:33:59.143 "params": { 00:33:59.143 "name": "nvme0", 00:33:59.143 "trtype": "TCP", 00:33:59.143 "adrfam": "IPv4", 00:33:59.143 "traddr": "127.0.0.1", 00:33:59.143 "trsvcid": "4420", 00:33:59.143 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:59.143 "prchk_reftag": false, 00:33:59.143 "prchk_guard": false, 00:33:59.143 "ctrlr_loss_timeout_sec": 0, 00:33:59.143 "reconnect_delay_sec": 0, 00:33:59.143 "fast_io_fail_timeout_sec": 0, 00:33:59.143 "psk": "key0", 00:33:59.143 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:59.143 "hdgst": false, 00:33:59.143 "ddgst": false, 00:33:59.143 "multipath": "multipath" 00:33:59.143 } 00:33:59.143 }, 00:33:59.143 { 00:33:59.143 "method": "bdev_nvme_set_hotplug", 00:33:59.143 "params": { 00:33:59.143 "period_us": 100000, 00:33:59.143 "enable": false 00:33:59.143 } 00:33:59.143 }, 00:33:59.143 { 00:33:59.143 "method": "bdev_wait_for_examine" 00:33:59.143 } 00:33:59.143 ] 00:33:59.143 }, 00:33:59.143 { 00:33:59.143 "subsystem": "nbd", 00:33:59.143 "config": [] 00:33:59.143 } 00:33:59.143 ] 00:33:59.143 }' 00:33:59.143 05:28:35 keyring_file -- keyring/file.sh@115 -- # killprocess 3857763 00:33:59.143 05:28:35 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3857763 ']' 00:33:59.143 05:28:35 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3857763 00:33:59.143 05:28:35 keyring_file -- common/autotest_common.sh@959 -- # uname 00:33:59.143 05:28:35 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:59.143 05:28:35 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3857763 00:33:59.401 05:28:35 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:59.401 05:28:35 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:59.401 05:28:35 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3857763' 00:33:59.401 killing process with pid 3857763 00:33:59.401 05:28:35 keyring_file -- common/autotest_common.sh@973 -- # kill 3857763 00:33:59.401 Received shutdown signal, test time was about 1.000000 seconds 00:33:59.401 00:33:59.401 Latency(us) 00:33:59.401 [2024-12-09T04:28:36.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:59.401 [2024-12-09T04:28:36.047Z] =================================================================================================================== 00:33:59.401 [2024-12-09T04:28:36.047Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:59.401 05:28:35 
keyring_file -- common/autotest_common.sh@978 -- # wait 3857763 00:33:59.401 05:28:35 keyring_file -- keyring/file.sh@118 -- # bperfpid=3859282 00:33:59.401 05:28:35 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3859282 /var/tmp/bperf.sock 00:33:59.401 05:28:35 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3859282 ']' 00:33:59.401 05:28:35 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:59.401 05:28:35 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:59.401 05:28:35 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:59.401 05:28:35 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:59.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:59.401 05:28:35 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:33:59.401 "subsystems": [ 00:33:59.401 { 00:33:59.401 "subsystem": "keyring", 00:33:59.401 "config": [ 00:33:59.401 { 00:33:59.401 "method": "keyring_file_add_key", 00:33:59.401 "params": { 00:33:59.401 "name": "key0", 00:33:59.401 "path": "/tmp/tmp.eCYTAcIdom" 00:33:59.401 } 00:33:59.401 }, 00:33:59.401 { 00:33:59.401 "method": "keyring_file_add_key", 00:33:59.401 "params": { 00:33:59.401 "name": "key1", 00:33:59.401 "path": "/tmp/tmp.X7SbGmBMnY" 00:33:59.401 } 00:33:59.401 } 00:33:59.401 ] 00:33:59.401 }, 00:33:59.401 { 00:33:59.401 "subsystem": "iobuf", 00:33:59.401 "config": [ 00:33:59.401 { 00:33:59.401 "method": "iobuf_set_options", 00:33:59.401 "params": { 00:33:59.401 "small_pool_count": 8192, 00:33:59.401 "large_pool_count": 1024, 00:33:59.401 "small_bufsize": 8192, 00:33:59.401 "large_bufsize": 135168, 00:33:59.401 "enable_numa": false 00:33:59.401 } 00:33:59.401 } 00:33:59.401 ] 00:33:59.401 }, 00:33:59.401 { 00:33:59.401 "subsystem": "sock", 00:33:59.401 "config": [ 00:33:59.401 { 00:33:59.401 "method": "sock_set_default_impl", 00:33:59.401 "params": { 00:33:59.401 "impl_name": "posix" 00:33:59.401 } 00:33:59.401 }, 00:33:59.401 { 00:33:59.401 "method": "sock_impl_set_options", 00:33:59.401 "params": { 00:33:59.401 "impl_name": "ssl", 00:33:59.401 "recv_buf_size": 4096, 00:33:59.401 "send_buf_size": 4096, 00:33:59.401 "enable_recv_pipe": true, 00:33:59.401 "enable_quickack": false, 00:33:59.401 "enable_placement_id": 0, 00:33:59.401 "enable_zerocopy_send_server": true, 00:33:59.401 "enable_zerocopy_send_client": false, 00:33:59.401 "zerocopy_threshold": 0, 00:33:59.401 "tls_version": 0, 00:33:59.401 "enable_ktls": false 00:33:59.401 } 00:33:59.401 }, 00:33:59.401 { 00:33:59.401 "method": "sock_impl_set_options", 00:33:59.401 "params": { 00:33:59.401 "impl_name": "posix", 00:33:59.401 "recv_buf_size": 2097152, 00:33:59.401 "send_buf_size": 2097152, 00:33:59.401 "enable_recv_pipe": true, 00:33:59.401 "enable_quickack": false, 00:33:59.401 "enable_placement_id": 0, 00:33:59.401 "enable_zerocopy_send_server": true, 00:33:59.401 "enable_zerocopy_send_client": false, 00:33:59.401 "zerocopy_threshold": 0, 00:33:59.401 "tls_version": 0, 00:33:59.401 "enable_ktls": false 00:33:59.401 } 00:33:59.401 } 00:33:59.401 ] 00:33:59.401 }, 00:33:59.401 { 00:33:59.401 "subsystem": "vmd", 00:33:59.401 "config": [] 00:33:59.401 }, 00:33:59.401 { 00:33:59.401 "subsystem": "accel", 00:33:59.401 "config": [ 00:33:59.401 
{ 00:33:59.401 "method": "accel_set_options", 00:33:59.401 "params": { 00:33:59.401 "small_cache_size": 128, 00:33:59.401 "large_cache_size": 16, 00:33:59.401 "task_count": 2048, 00:33:59.401 "sequence_count": 2048, 00:33:59.401 "buf_count": 2048 00:33:59.401 } 00:33:59.401 } 00:33:59.401 ] 00:33:59.401 }, 00:33:59.401 { 00:33:59.401 "subsystem": "bdev", 00:33:59.401 "config": [ 00:33:59.401 { 00:33:59.402 "method": "bdev_set_options", 00:33:59.402 "params": { 00:33:59.402 "bdev_io_pool_size": 65535, 00:33:59.402 "bdev_io_cache_size": 256, 00:33:59.402 "bdev_auto_examine": true, 00:33:59.402 "iobuf_small_cache_size": 128, 00:33:59.402 "iobuf_large_cache_size": 16 00:33:59.402 } 00:33:59.402 }, 00:33:59.402 { 00:33:59.402 "method": "bdev_raid_set_options", 00:33:59.402 "params": { 00:33:59.402 "process_window_size_kb": 1024, 00:33:59.402 "process_max_bandwidth_mb_sec": 0 00:33:59.402 } 00:33:59.402 }, 00:33:59.402 { 00:33:59.402 "method": "bdev_iscsi_set_options", 00:33:59.402 "params": { 00:33:59.402 "timeout_sec": 30 00:33:59.402 } 00:33:59.402 }, 00:33:59.402 { 00:33:59.402 "method": "bdev_nvme_set_options", 00:33:59.402 "params": { 00:33:59.402 "action_on_timeout": "none", 00:33:59.402 "timeout_us": 0, 00:33:59.402 "timeout_admin_us": 0, 00:33:59.402 "keep_alive_timeout_ms": 10000, 00:33:59.402 "arbitration_burst": 0, 00:33:59.402 "low_priority_weight": 0, 00:33:59.402 "medium_priority_weight": 0, 00:33:59.402 "high_priority_weight": 0, 00:33:59.402 "nvme_adminq_poll_period_us": 10000, 00:33:59.402 "nvme_ioq_poll_period_us": 0, 00:33:59.402 "io_queue_requests": 512, 00:33:59.402 "delay_cmd_submit": true, 00:33:59.402 "transport_retry_count": 4, 00:33:59.402 "bdev_retry_count": 3, 00:33:59.402 "transport_ack_timeout": 0, 00:33:59.402 "ctrlr_loss_timeout_sec": 0, 00:33:59.402 "reconnect_delay_sec": 0, 00:33:59.402 "fast_io_fail_timeout_sec": 0, 00:33:59.402 "disable_auto_failback": false, 00:33:59.402 "generate_uuids": false, 00:33:59.402 "transport_tos": 0, 00:33:59.402 "nvme_error_stat": false, 00:33:59.402 "rdma_srq_size": 0, 00:33:59.402 "io_path_stat": false, 00:33:59.402 "allow_accel_sequence": false, 00:33:59.402 "rdma_max_cq_size": 0, 00:33:59.402 "rdma_cm_event_timeout_ms": 0, 00:33:59.402 "dhchap_digests": [ 00:33:59.402 "sha256", 00:33:59.402 "sha384", 00:33:59.402 "sha512" 00:33:59.402 ], 00:33:59.402 "dhchap_dhgroups": [ 00:33:59.402 "null", 00:33:59.402 "ffdhe2048", 00:33:59.402 "ffdhe3072", 00:33:59.402 "ffdhe4096", 00:33:59.402 "ffdhe6144", 00:33:59.402 "ffdhe8192" 00:33:59.402 ] 00:33:59.402 } 00:33:59.402 }, 00:33:59.402 { 00:33:59.402 "method": "bdev_nvme_attach_controller", 00:33:59.402 "params": { 00:33:59.402 "name": "nvme0", 00:33:59.402 "trtype": "TCP", 00:33:59.402 "adrfam": "IPv4", 00:33:59.402 "traddr": "127.0.0.1", 00:33:59.402 "trsvcid": "4420", 00:33:59.402 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:59.402 "prchk_reftag": false, 00:33:59.402 "prchk_guard": false, 00:33:59.402 "ctrlr_loss_timeout_sec": 0, 00:33:59.402 "reconnect_delay_sec": 0, 00:33:59.402 "fast_io_fail_timeout_sec": 0, 00:33:59.402 "psk": "key0", 00:33:59.402 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:59.402 "hdgst": false, 00:33:59.402 "ddgst": false, 00:33:59.402 "multipath": "multipath" 00:33:59.402 } 00:33:59.402 }, 00:33:59.402 { 00:33:59.402 "method": "bdev_nvme_set_hotplug", 00:33:59.402 "params": { 00:33:59.402 "period_us": 100000, 00:33:59.402 "enable": false 00:33:59.402 } 00:33:59.402 }, 00:33:59.402 { 00:33:59.402 "method": "bdev_wait_for_examine" 00:33:59.402 } 00:33:59.402 
] 00:33:59.402 }, 00:33:59.402 { 00:33:59.402 "subsystem": "nbd", 00:33:59.402 "config": [] 00:33:59.402 } 00:33:59.402 ] 00:33:59.402 }' 00:33:59.402 05:28:35 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:59.402 05:28:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:59.402 [2024-12-09 05:28:36.031548] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 00:33:59.402 [2024-12-09 05:28:36.031597] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3859282 ] 00:33:59.659 [2024-12-09 05:28:36.096035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:59.659 [2024-12-09 05:28:36.136727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:59.659 [2024-12-09 05:28:36.299262] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:00.226 05:28:36 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:00.226 05:28:36 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:00.226 05:28:36 keyring_file -- keyring/file.sh@121 -- # jq length 00:34:00.226 05:28:36 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:34:00.226 05:28:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:00.484 05:28:37 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:34:00.484 05:28:37 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:34:00.484 05:28:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:00.484 05:28:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:00.484 05:28:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:00.484 05:28:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:00.484 05:28:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:00.743 05:28:37 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:34:00.743 05:28:37 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:34:00.743 05:28:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:00.743 05:28:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:00.743 05:28:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:00.743 05:28:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:00.743 05:28:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:01.002 05:28:37 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:34:01.002 05:28:37 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:34:01.002 05:28:37 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:34:01.002 05:28:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:34:01.002 05:28:37 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:34:01.002 05:28:37 keyring_file -- keyring/file.sh@1 -- # cleanup 00:34:01.002 05:28:37 keyring_file -- keyring/file.sh@19 
-- # rm -f /tmp/tmp.eCYTAcIdom /tmp/tmp.X7SbGmBMnY 00:34:01.002 05:28:37 keyring_file -- keyring/file.sh@20 -- # killprocess 3859282 00:34:01.002 05:28:37 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3859282 ']' 00:34:01.002 05:28:37 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3859282 00:34:01.002 05:28:37 keyring_file -- common/autotest_common.sh@959 -- # uname 00:34:01.002 05:28:37 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:01.002 05:28:37 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3859282 00:34:01.261 05:28:37 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:01.261 05:28:37 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:01.261 05:28:37 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3859282' 00:34:01.261 killing process with pid 3859282 00:34:01.261 05:28:37 keyring_file -- common/autotest_common.sh@973 -- # kill 3859282 00:34:01.261 Received shutdown signal, test time was about 1.000000 seconds 00:34:01.261 00:34:01.261 Latency(us) 00:34:01.261 [2024-12-09T04:28:37.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:01.261 [2024-12-09T04:28:37.907Z] =================================================================================================================== 00:34:01.261 [2024-12-09T04:28:37.907Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:01.261 05:28:37 keyring_file -- common/autotest_common.sh@978 -- # wait 3859282 00:34:01.261 05:28:37 keyring_file -- keyring/file.sh@21 -- # killprocess 3857732 00:34:01.261 05:28:37 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3857732 ']' 00:34:01.261 05:28:37 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3857732 00:34:01.261 05:28:37 keyring_file -- common/autotest_common.sh@959 -- # uname 00:34:01.261 05:28:37 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:01.261 05:28:37 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3857732 00:34:01.520 05:28:37 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:01.520 05:28:37 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:01.520 05:28:37 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3857732' 00:34:01.520 killing process with pid 3857732 00:34:01.520 05:28:37 keyring_file -- common/autotest_common.sh@973 -- # kill 3857732 00:34:01.520 05:28:37 keyring_file -- common/autotest_common.sh@978 -- # wait 3857732 00:34:01.779 00:34:01.779 real 0m11.807s 00:34:01.779 user 0m28.958s 00:34:01.779 sys 0m2.731s 00:34:01.779 05:28:38 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:01.779 05:28:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:01.779 ************************************ 00:34:01.779 END TEST keyring_file 00:34:01.779 ************************************ 00:34:01.779 05:28:38 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:34:01.779 05:28:38 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:01.779 05:28:38 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:01.779 05:28:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:01.779 05:28:38 -- 
common/autotest_common.sh@10 -- # set +x 00:34:01.779 ************************************ 00:34:01.779 START TEST keyring_linux 00:34:01.779 ************************************ 00:34:01.779 05:28:38 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:01.779 Joined session keyring: 153838827 00:34:01.779 * Looking for test storage... 00:34:01.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:01.779 05:28:38 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:01.779 05:28:38 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:34:01.779 05:28:38 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:02.038 05:28:38 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@345 -- # : 1 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@368 -- # return 0 00:34:02.038 05:28:38 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:02.038 05:28:38 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:02.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.038 --rc genhtml_branch_coverage=1 00:34:02.038 --rc genhtml_function_coverage=1 00:34:02.038 --rc genhtml_legend=1 00:34:02.038 --rc geninfo_all_blocks=1 00:34:02.038 --rc geninfo_unexecuted_blocks=1 00:34:02.038 00:34:02.038 ' 00:34:02.038 05:28:38 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:02.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.038 --rc genhtml_branch_coverage=1 00:34:02.038 --rc genhtml_function_coverage=1 00:34:02.038 --rc genhtml_legend=1 00:34:02.038 --rc geninfo_all_blocks=1 00:34:02.038 --rc geninfo_unexecuted_blocks=1 00:34:02.038 00:34:02.038 ' 00:34:02.038 05:28:38 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:02.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.038 --rc genhtml_branch_coverage=1 00:34:02.038 --rc genhtml_function_coverage=1 00:34:02.038 --rc genhtml_legend=1 00:34:02.038 --rc geninfo_all_blocks=1 00:34:02.038 --rc geninfo_unexecuted_blocks=1 00:34:02.038 00:34:02.038 ' 00:34:02.038 05:28:38 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:02.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.038 --rc genhtml_branch_coverage=1 00:34:02.038 --rc genhtml_function_coverage=1 00:34:02.038 --rc genhtml_legend=1 00:34:02.038 --rc geninfo_all_blocks=1 00:34:02.038 --rc geninfo_unexecuted_blocks=1 00:34:02.038 00:34:02.038 ' 00:34:02.038 05:28:38 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:02.038 05:28:38 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:02.038 05:28:38 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:34:02.038 05:28:38 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:02.038 05:28:38 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:02.038 05:28:38 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:02.038 05:28:38 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:02.038 05:28:38 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:34:02.038 05:28:38 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:02.038 05:28:38 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:02.038 05:28:38 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:02.038 05:28:38 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:02.038 05:28:38 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:02.038 05:28:38 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:02.038 05:28:38 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:02.038 05:28:38 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:02.038 05:28:38 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:02.038 05:28:38 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:02.038 05:28:38 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:02.038 05:28:38 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:02.038 05:28:38 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:02.038 05:28:38 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.038 05:28:38 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.038 05:28:38 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.038 05:28:38 keyring_linux -- paths/export.sh@5 -- # export PATH 00:34:02.038 05:28:38 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:34:02.038 05:28:38 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:34:02.038 05:28:38 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:02.038 05:28:38 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:02.038 05:28:38 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:02.038 05:28:38 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:02.039 05:28:38 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:02.039 05:28:38 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:02.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:02.039 05:28:38 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:02.039 05:28:38 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:02.039 05:28:38 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:02.039 05:28:38 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:02.039 05:28:38 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:02.039 05:28:38 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:02.039 05:28:38 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:34:02.039 05:28:38 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:34:02.039 05:28:38 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:34:02.039 05:28:38 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:34:02.039 05:28:38 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:02.039 05:28:38 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:34:02.039 05:28:38 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:02.039 05:28:38 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:02.039 05:28:38 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:34:02.039 05:28:38 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:02.039 05:28:38 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:02.039 05:28:38 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:34:02.039 05:28:38 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:02.039 05:28:38 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:34:02.039 05:28:38 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:34:02.039 05:28:38 keyring_linux -- nvmf/common.sh@733 -- # python - 00:34:02.039 05:28:38 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:34:02.039 05:28:38 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:34:02.039 /tmp/:spdk-test:key0 00:34:02.039 05:28:38 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:34:02.039 05:28:38 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:02.039 05:28:38 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:34:02.039 05:28:38 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:02.039 05:28:38 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:02.039 05:28:38 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:34:02.039 
05:28:38 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:02.039 05:28:38 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:02.039 05:28:38 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:34:02.039 05:28:38 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:02.039 05:28:38 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:34:02.039 05:28:38 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:34:02.039 05:28:38 keyring_linux -- nvmf/common.sh@733 -- # python - 00:34:02.039 05:28:38 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:34:02.039 05:28:38 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:34:02.039 /tmp/:spdk-test:key1 00:34:02.039 05:28:38 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3859837 00:34:02.039 05:28:38 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3859837 00:34:02.039 05:28:38 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3859837 ']' 00:34:02.039 05:28:38 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:02.039 05:28:38 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:02.039 05:28:38 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:02.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:02.039 05:28:38 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:02.039 05:28:38 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:02.039 05:28:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:02.039 [2024-12-09 05:28:38.649756] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
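The prep_key trace above pipes the raw hex key through format_interchange_psk and then restricts the resulting file to mode 0600 before it is handed to the keyring. A minimal stand-alone sketch of that step follows; the "00" digest field and the little-endian CRC32 suffix are assumptions inferred from the NVMeTLSkey-1:00:...: strings visible in this log, not a verbatim copy of the helper in nvmf/common.sh.

  # Hedged sketch of the prep_key/format_interchange_psk step traced above (assumptions noted in the lead-in).
  key=00112233445566778899aabbccddeeff      # the key0 value used throughout this trace
  path=$(mktemp)                            # stands in for /tmp/tmp.XXXXXXXXXX or /tmp/:spdk-test:key0
  python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:00:%s:" % base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())' "$key" > "$path"
  chmod 0600 "$path"                        # 0660 is rejected by keyring_file_add_key, as the chmod test earlier in this trace shows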
00:34:02.039 [2024-12-09 05:28:38.649808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3859837 ] 00:34:02.298 [2024-12-09 05:28:38.714528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:02.298 [2024-12-09 05:28:38.757013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:02.557 05:28:38 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:02.557 05:28:38 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:34:02.557 05:28:38 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:34:02.557 05:28:38 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.557 05:28:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:02.557 [2024-12-09 05:28:38.967575] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:02.557 null0 00:34:02.557 [2024-12-09 05:28:38.999643] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:02.557 [2024-12-09 05:28:39.000010] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:02.557 05:28:39 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.557 05:28:39 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:34:02.557 592293782 00:34:02.557 05:28:39 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:34:02.557 781425836 00:34:02.557 05:28:39 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3859845 00:34:02.557 05:28:39 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3859845 /var/tmp/bperf.sock 00:34:02.557 05:28:39 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3859845 ']' 00:34:02.557 05:28:39 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:02.557 05:28:39 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:02.557 05:28:39 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:02.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:02.557 05:28:39 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:34:02.557 05:28:39 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:02.557 05:28:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:02.557 [2024-12-09 05:28:39.071992] Starting SPDK v25.01-pre git sha1 421ce3854 / DPDK 24.03.0 initialization... 
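keyring_linux stores the PSKs in the kernel session keyring rather than in files: the two keyctl add user ... @s calls above return the serials 592293782 and 781425836, and the checks further down resolve the key by name with keyctl search/print before bdevperf attaches the controller. The condensed sketch below strings those same commands together; the sn=$(...) capture is added glue, everything else appears verbatim in this trace, and it is assumed to run inside the keyctl session wrapper joined at the start of the test.

  # Register the interchange-format PSK under a well-known name in the session keyring
  keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s   # prints the serial, 592293782 in this run
  # Resolve the key by name and dump its payload, as check_keys does below
  sn=$(keyctl search @s user :spdk-test:key0)
  keyctl print "$sn"
  # Hand the key to bdevperf by name over the same bperf.sock RPC channel traced below
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0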
00:34:02.557 [2024-12-09 05:28:39.072039] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3859845 ] 00:34:02.557 [2024-12-09 05:28:39.136309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:02.557 [2024-12-09 05:28:39.176726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:02.815 05:28:39 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:02.815 05:28:39 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:34:02.815 05:28:39 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:34:02.815 05:28:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:34:02.815 05:28:39 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:34:02.815 05:28:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:03.073 05:28:39 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:03.073 05:28:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:03.331 [2024-12-09 05:28:39.842094] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:03.331 nvme0n1 00:34:03.331 05:28:39 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:34:03.331 05:28:39 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:34:03.331 05:28:39 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:34:03.331 05:28:39 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:34:03.331 05:28:39 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:34:03.331 05:28:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:03.589 05:28:40 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:34:03.589 05:28:40 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:34:03.589 05:28:40 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:34:03.589 05:28:40 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:34:03.589 05:28:40 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:03.589 05:28:40 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:34:03.589 05:28:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:03.848 05:28:40 keyring_linux -- keyring/linux.sh@25 -- # sn=592293782 00:34:03.848 05:28:40 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:34:03.848 05:28:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:34:03.848 05:28:40 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 592293782 == \5\9\2\2\9\3\7\8\2 ]] 00:34:03.848 05:28:40 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 592293782 00:34:03.848 05:28:40 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:34:03.848 05:28:40 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:03.848 Running I/O for 1 seconds... 00:34:05.223 17340.00 IOPS, 67.73 MiB/s 00:34:05.223 Latency(us) 00:34:05.223 [2024-12-09T04:28:41.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:05.223 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:05.223 nvme0n1 : 1.01 17340.87 67.74 0.00 0.00 7353.05 3604.48 10428.77 00:34:05.223 [2024-12-09T04:28:41.869Z] =================================================================================================================== 00:34:05.223 [2024-12-09T04:28:41.869Z] Total : 17340.87 67.74 0.00 0.00 7353.05 3604.48 10428.77 00:34:05.223 { 00:34:05.223 "results": [ 00:34:05.223 { 00:34:05.223 "job": "nvme0n1", 00:34:05.223 "core_mask": "0x2", 00:34:05.223 "workload": "randread", 00:34:05.223 "status": "finished", 00:34:05.223 "queue_depth": 128, 00:34:05.223 "io_size": 4096, 00:34:05.223 "runtime": 1.007331, 00:34:05.223 "iops": 17340.874052322426, 00:34:05.223 "mibps": 67.73778926688448, 00:34:05.223 "io_failed": 0, 00:34:05.223 "io_timeout": 0, 00:34:05.223 "avg_latency_us": 7353.0538389701405, 00:34:05.223 "min_latency_us": 3604.48, 00:34:05.223 "max_latency_us": 10428.772173913043 00:34:05.223 } 00:34:05.223 ], 00:34:05.223 "core_count": 1 00:34:05.223 } 00:34:05.223 05:28:41 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:05.223 05:28:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:05.223 05:28:41 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:34:05.223 05:28:41 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:34:05.223 05:28:41 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:34:05.223 05:28:41 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:34:05.223 05:28:41 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:34:05.223 05:28:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:05.223 05:28:41 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:34:05.223 05:28:41 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:34:05.223 05:28:41 keyring_linux -- keyring/linux.sh@23 -- # return 00:34:05.223 05:28:41 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:05.223 05:28:41 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:34:05.223 05:28:41 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:34:05.223 05:28:41 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:05.223 05:28:41 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:05.223 05:28:41 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:05.482 05:28:41 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:05.482 05:28:41 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:05.482 05:28:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:05.482 [2024-12-09 05:28:42.049249] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:05.482 [2024-12-09 05:28:42.049987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a4fa0 (107): Transport endpoint is not connected 00:34:05.482 [2024-12-09 05:28:42.050982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a4fa0 (9): Bad file descriptor 00:34:05.482 [2024-12-09 05:28:42.051983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:34:05.482 [2024-12-09 05:28:42.051994] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:05.482 [2024-12-09 05:28:42.052008] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:34:05.482 [2024-12-09 05:28:42.052017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:34:05.482 request: 00:34:05.482 { 00:34:05.482 "name": "nvme0", 00:34:05.482 "trtype": "tcp", 00:34:05.482 "traddr": "127.0.0.1", 00:34:05.482 "adrfam": "ipv4", 00:34:05.482 "trsvcid": "4420", 00:34:05.482 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:05.482 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:05.482 "prchk_reftag": false, 00:34:05.482 "prchk_guard": false, 00:34:05.482 "hdgst": false, 00:34:05.482 "ddgst": false, 00:34:05.482 "psk": ":spdk-test:key1", 00:34:05.482 "allow_unrecognized_csi": false, 00:34:05.482 "method": "bdev_nvme_attach_controller", 00:34:05.482 "req_id": 1 00:34:05.482 } 00:34:05.482 Got JSON-RPC error response 00:34:05.482 response: 00:34:05.482 { 00:34:05.482 "code": -5, 00:34:05.482 "message": "Input/output error" 00:34:05.482 } 00:34:05.482 05:28:42 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:34:05.482 05:28:42 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:05.482 05:28:42 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:05.482 05:28:42 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:05.482 05:28:42 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:34:05.482 05:28:42 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:34:05.482 05:28:42 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:34:05.482 05:28:42 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:34:05.482 05:28:42 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:34:05.482 05:28:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:34:05.482 05:28:42 keyring_linux -- keyring/linux.sh@33 -- # sn=592293782 00:34:05.482 05:28:42 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 592293782 00:34:05.482 1 links removed 00:34:05.482 05:28:42 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:34:05.482 05:28:42 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:34:05.482 05:28:42 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:34:05.482 05:28:42 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:34:05.482 05:28:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:34:05.482 05:28:42 keyring_linux -- keyring/linux.sh@33 -- # sn=781425836 00:34:05.482 05:28:42 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 781425836 00:34:05.482 1 links removed 00:34:05.482 05:28:42 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3859845 00:34:05.482 05:28:42 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3859845 ']' 00:34:05.482 05:28:42 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3859845 00:34:05.482 05:28:42 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:34:05.482 05:28:42 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:05.482 05:28:42 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3859845 00:34:05.741 05:28:42 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:05.741 05:28:42 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:05.741 05:28:42 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3859845' 00:34:05.741 killing process with pid 3859845 00:34:05.741 05:28:42 keyring_linux -- common/autotest_common.sh@973 -- # kill 3859845 00:34:05.741 Received shutdown signal, test time was about 1.000000 seconds 00:34:05.741 00:34:05.741 
Latency(us) 00:34:05.741 [2024-12-09T04:28:42.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:05.741 [2024-12-09T04:28:42.387Z] =================================================================================================================== 00:34:05.741 [2024-12-09T04:28:42.388Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:05.742 05:28:42 keyring_linux -- common/autotest_common.sh@978 -- # wait 3859845 00:34:05.742 05:28:42 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3859837 00:34:05.742 05:28:42 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3859837 ']' 00:34:05.742 05:28:42 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3859837 00:34:05.742 05:28:42 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:34:05.742 05:28:42 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:05.742 05:28:42 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3859837 00:34:05.742 05:28:42 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:05.742 05:28:42 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:05.742 05:28:42 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3859837' 00:34:05.742 killing process with pid 3859837 00:34:05.742 05:28:42 keyring_linux -- common/autotest_common.sh@973 -- # kill 3859837 00:34:05.742 05:28:42 keyring_linux -- common/autotest_common.sh@978 -- # wait 3859837 00:34:06.308 00:34:06.308 real 0m4.394s 00:34:06.308 user 0m7.874s 00:34:06.308 sys 0m1.576s 00:34:06.308 05:28:42 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:06.308 05:28:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:06.308 ************************************ 00:34:06.308 END TEST keyring_linux 00:34:06.308 ************************************ 00:34:06.308 05:28:42 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:34:06.308 05:28:42 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:34:06.308 05:28:42 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:34:06.308 05:28:42 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:34:06.308 05:28:42 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:34:06.308 05:28:42 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:34:06.308 05:28:42 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:34:06.308 05:28:42 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:06.308 05:28:42 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:34:06.308 05:28:42 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:06.308 05:28:42 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:34:06.308 05:28:42 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:06.308 05:28:42 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:06.308 05:28:42 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:34:06.308 05:28:42 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:34:06.309 05:28:42 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:34:06.309 05:28:42 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:34:06.309 05:28:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:06.309 05:28:42 -- common/autotest_common.sh@10 -- # set +x 00:34:06.309 05:28:42 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:34:06.309 05:28:42 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:34:06.309 05:28:42 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:34:06.309 05:28:42 -- common/autotest_common.sh@10 -- # set +x 00:34:11.565 INFO: APP EXITING 
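For reference, the keyring cleanup that ran at the end of the test above resolves each :spdk-test:key* name back to its serial number and unlinks it from the session keyring ("1 links removed" per key in the log). A minimal sketch, assuming a simplified form of the unlink_key helper seen in keyring/linux.sh's trace, is:

unlink_key() {
    local name=$1 sn
    # Resolve the key's serial from the session keyring; nothing to do if it is already gone.
    sn=$(keyctl search @s user "$name") || return 0
    keyctl unlink "$sn"    # drops the link, as reported above
}

unlink_key :spdk-test:key0
unlink_key :spdk-test:key1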
00:34:11.565 INFO: killing all VMs 00:34:11.565 INFO: killing vhost app 00:34:11.565 INFO: EXIT DONE 00:34:13.462 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:34:13.462 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:34:13.462 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:34:13.462 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:34:13.462 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:34:13.462 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:34:13.462 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:34:13.462 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:34:13.462 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:34:13.462 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:34:13.462 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:34:13.462 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:34:13.462 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:34:13.462 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:34:13.462 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:34:13.462 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:34:13.721 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:34:16.252 Cleaning 00:34:16.252 Removing: /var/run/dpdk/spdk0/config 00:34:16.252 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:16.252 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:16.252 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:16.252 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:16.252 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:34:16.252 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:34:16.252 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:34:16.252 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:34:16.252 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:16.252 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:16.252 Removing: /var/run/dpdk/spdk1/config 00:34:16.252 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:34:16.252 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:34:16.252 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:34:16.252 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:34:16.252 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:34:16.252 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:34:16.252 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:34:16.252 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:34:16.252 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:34:16.252 Removing: /var/run/dpdk/spdk1/hugepage_info 00:34:16.252 Removing: /var/run/dpdk/spdk2/config 00:34:16.252 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:34:16.252 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:34:16.252 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:34:16.252 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:34:16.252 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:34:16.252 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:34:16.252 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:34:16.252 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:34:16.252 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:34:16.252 Removing: /var/run/dpdk/spdk2/hugepage_info 00:34:16.252 Removing: /var/run/dpdk/spdk3/config 00:34:16.252 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:34:16.252 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:34:16.252 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:34:16.252 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:34:16.252 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:34:16.252 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:34:16.252 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:34:16.252 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:34:16.252 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:34:16.252 Removing: /var/run/dpdk/spdk3/hugepage_info 00:34:16.252 Removing: /var/run/dpdk/spdk4/config 00:34:16.252 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:34:16.252 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:34:16.252 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:34:16.252 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:34:16.252 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:34:16.252 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:34:16.252 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:34:16.252 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:34:16.252 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:34:16.252 Removing: /var/run/dpdk/spdk4/hugepage_info 00:34:16.252 Removing: /dev/shm/bdev_svc_trace.1 00:34:16.252 Removing: /dev/shm/nvmf_trace.0 00:34:16.252 Removing: /dev/shm/spdk_tgt_trace.pid3384620 00:34:16.252 Removing: /var/run/dpdk/spdk0 00:34:16.252 Removing: /var/run/dpdk/spdk1 00:34:16.252 Removing: /var/run/dpdk/spdk2 00:34:16.512 Removing: /var/run/dpdk/spdk3 00:34:16.512 Removing: /var/run/dpdk/spdk4 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3382494 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3383543 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3384620 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3385253 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3386192 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3386222 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3387264 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3387424 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3387778 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3389300 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3390568 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3390858 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3391147 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3391419 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3391558 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3391782 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3392034 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3392321 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3393059 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3396091 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3396315 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3396572 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3396598 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3397073 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3397076 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3397573 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3397738 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3398058 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3398068 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3398323 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3398338 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3398899 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3399147 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3399446 00:34:16.512 Removing: 
/var/run/dpdk/spdk_pid3403149 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3407410 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3417514 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3418253 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3422917 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3423175 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3427433 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3433315 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3435922 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3446123 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3455058 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3456891 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3457821 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3475098 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3479043 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3524216 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3529460 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3535231 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3541660 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3541715 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3542573 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3543348 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3544256 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3544933 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3544944 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3545174 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3545404 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3545407 00:34:16.512 Removing: /var/run/dpdk/spdk_pid3546318 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3547193 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3548016 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3548620 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3548635 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3548865 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3550087 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3551076 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3559169 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3588292 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3592807 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3594533 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3596278 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3596506 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3596718 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3596807 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3597391 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3599617 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3600428 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3600880 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3603179 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3603588 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3604196 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3608459 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3613682 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3613683 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3613684 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3617418 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3625734 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3629682 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3635679 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3636837 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3638375 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3639686 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3644291 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3649014 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3652943 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3660180 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3660321 00:34:16.772 Removing: 
/var/run/dpdk/spdk_pid3664894 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3665125 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3665361 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3665781 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3665823 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3670129 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3670671 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3674992 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3677754 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3682925 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3688487 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3697571 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3704793 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3704796 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3723661 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3724251 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3724722 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3725301 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3725947 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3726620 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3727100 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3727634 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3731817 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3732050 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3738111 00:34:16.772 Removing: /var/run/dpdk/spdk_pid3738388 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3743771 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3748382 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3758125 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3758595 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3762847 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3763099 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3767340 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3772973 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3775559 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3785490 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3794669 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3796275 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3797188 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3813311 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3817112 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3819799 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3827311 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3827329 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3832341 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3834307 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3836283 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3837461 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3839864 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3841102 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3849611 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3850077 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3850539 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3852812 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3853389 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3853938 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3857732 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3857763 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3859282 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3859837 00:34:17.031 Removing: /var/run/dpdk/spdk_pid3859845 00:34:17.031 Clean 00:34:17.031 05:28:53 -- common/autotest_common.sh@1453 -- # return 0 00:34:17.031 05:28:53 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:34:17.031 05:28:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:17.031 05:28:53 -- common/autotest_common.sh@10 -- # set +x 00:34:17.031 05:28:53 -- 
spdk/autotest.sh@391 -- # timing_exit autotest 00:34:17.031 05:28:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:17.031 05:28:53 -- common/autotest_common.sh@10 -- # set +x 00:34:17.289 05:28:53 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:17.289 05:28:53 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:34:17.289 05:28:53 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:34:17.290 05:28:53 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:34:17.290 05:28:53 -- spdk/autotest.sh@398 -- # hostname 00:34:17.290 05:28:53 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:34:17.290 geninfo: WARNING: invalid characters removed from testname! 00:34:39.210 05:29:14 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:41.112 05:29:17 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:43.011 05:29:19 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:44.904 05:29:21 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:46.801 05:29:23 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:48.732 05:29:25 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:50.796 05:29:27 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:50.796 05:29:27 -- spdk/autorun.sh@1 -- $ timing_finish 00:34:50.796 05:29:27 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:34:50.796 05:29:27 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:50.796 05:29:27 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:34:50.796 05:29:27 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:50.796 + [[ -n 3305475 ]] 00:34:50.796 + sudo kill 3305475 00:34:50.806 [Pipeline] } 00:34:50.823 [Pipeline] // stage 00:34:50.829 [Pipeline] } 00:34:50.844 [Pipeline] // timeout 00:34:50.849 [Pipeline] } 00:34:50.864 [Pipeline] // catchError 00:34:50.870 [Pipeline] } 00:34:50.885 [Pipeline] // wrap 00:34:50.892 [Pipeline] } 00:34:50.905 [Pipeline] // catchError 00:34:50.915 [Pipeline] stage 00:34:50.917 [Pipeline] { (Epilogue) 00:34:50.931 [Pipeline] catchError 00:34:50.933 [Pipeline] { 00:34:50.947 [Pipeline] echo 00:34:50.949 Cleanup processes 00:34:50.955 [Pipeline] sh 00:34:51.241 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:51.241 3870218 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:51.256 [Pipeline] sh 00:34:51.540 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:51.540 ++ grep -v 'sudo pgrep' 00:34:51.540 ++ awk '{print $1}' 00:34:51.540 + sudo kill -9 00:34:51.540 + true 00:34:51.553 [Pipeline] sh 00:34:51.838 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:04.047 [Pipeline] sh 00:35:04.332 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:04.332 Artifacts sizes are good 00:35:04.347 [Pipeline] archiveArtifacts 00:35:04.355 Archiving artifacts 00:35:04.477 [Pipeline] sh 00:35:04.762 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:35:04.778 [Pipeline] cleanWs 00:35:04.789 [WS-CLEANUP] Deleting project workspace... 00:35:04.789 [WS-CLEANUP] Deferred wipeout is used... 00:35:04.796 [WS-CLEANUP] done 00:35:04.798 [Pipeline] } 00:35:04.815 [Pipeline] // catchError 00:35:04.825 [Pipeline] sh 00:35:05.121 + logger -p user.info -t JENKINS-CI 00:35:05.128 [Pipeline] } 00:35:05.140 [Pipeline] // stage 00:35:05.145 [Pipeline] } 00:35:05.158 [Pipeline] // node 00:35:05.163 [Pipeline] End of Pipeline 00:35:05.254 Finished: SUCCESS